diff --git a/README.md b/README.md
index bc4daee7..e826faa3 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@ Goiardi
 Goiardi is an implementation of the Chef server (http://www.chef.io) written in
 Go. It can either run entirely in memory with the option to save and load the
 in-memory data and search indexes to and from disk, drawing inspiration from
-chef-zero, or it can use MySQL or PostgreSQL as its storage backend.
+chef-zero, or it can use MariaDB, MySQL, or PostgreSQL as its storage backend.
 
 DOCUMENTATION
 -------------
diff --git a/docs/features/berks.rst b/docs/features/berks.rst
index ce67976c..1df3ad1d 100644
--- a/docs/features/berks.rst
+++ b/docs/features/berks.rst
@@ -5,4 +5,4 @@ Berks Universe Endpoint
 Starting with version 0.6.1, goiardi supports the berks-api ``/universe`` endpoint. It returns a JSON list of all the cookbooks and their versions that have been uploaded to the server, along with the URL and dependencies of each version. The requester will need to be properly authenticated with the server to use the universe endpoint.
 
-The universe endpoint works with all backends, but with a ridiculous number of cookbooks (like, loading all 6000+ cookbooks in the Chef Supermarket), the Postgres implementation is able to take advantage of some Postgres specific functionality to generate that page significantly faster than the in-mem or MySQL implementations. It's not too bad, but on my laptop at home goiardi could generate /universe against the full 6000+ cookbooks of the supermarket in ~350 milliseconds, while MySQL took about 1 second and in-mem took about 1.2 seconds. Normal functionality is OK, but if you have that many cookbooks and expect to use the universe endpoint often you may wish to consider using Postgres.
+The universe endpoint works with all backends, but with a ridiculous number of cookbooks (like, loading all 6000+ cookbooks in the Chef Supermarket), the Postgres implementation is able to take advantage of some Postgres-specific functionality to generate that page significantly faster than the in-mem or MySQL/MariaDB implementations. The others aren't too bad: on my laptop at home, goiardi could generate /universe against the full 6000+ cookbooks of the supermarket in ~350 milliseconds with Postgres, while MySQL took about 1 second and in-mem took about 1.2 seconds. Normal functionality is OK either way, but if you have that many cookbooks and expect to use the universe endpoint often, you may wish to consider using Postgres.
diff --git a/docs/features/data.rst b/docs/features/data.rst
index 06e3cc51..5c647283 100644
--- a/docs/features/data.rst
+++ b/docs/features/data.rst
@@ -3,7 +3,7 @@
 Import and Export of Data
 =========================
 
-Goiardi can now import and export its data in a JSON file. This can help both when upgrading, when the on-disk data format changes between releases, and to convert your goiardi installation from in-memory to MySQL (or vice versa). The JSON file has a version number set (currently 1.0), so that in the future if there is some sort of incompatible change to the JSON file format the importer will be able to handle it.
+Goiardi can now import and export its data in a JSON file. This can help both when upgrading (when the on-disk data format changes between releases) and when converting your goiardi installation from in-memory to MySQL/MariaDB (or vice versa). The JSON file has a version number set (currently 1.0), so that if there is ever an incompatible change to the JSON file format, the importer will be able to handle it.
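+
+As a rough sketch of such a conversion (the ``-x`` export flag is described in the upgrading docs; the ``-m`` import flag, the data/index file flags, and the paths shown here are illustrative, so check ``goiardi --help`` for the exact flags in your version), you would dump the old instance and then load the dump into the new one: ::
+
+    # dump all data from the file-backed in-memory instance to JSON
+    goiardi -D /var/lib/goiardi/data.bin -i /var/lib/goiardi/index.bin -x /tmp/goiardi-export.json
+
+    # load the dump into a MySQL/MariaDB-backed instance
+    goiardi --use-mysql -c /etc/goiardi/goiardi.conf -m /tmp/goiardi-export.json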
 Before importing data, you should back up any existing data and index files (and take a snapshot of the SQL db, if applicable) if there's any reason you might want them around later. After exporting, you may wish to hold on to the old installation data until you're satisfied that the import went well.
diff --git a/docs/features/persistence.rst b/docs/features/persistence.rst
index 828f9143..9d4fdc50 100644
--- a/docs/features/persistence.rst
+++ b/docs/features/persistence.rst
@@ -7,10 +7,10 @@ There are two general options that can be set for either database: ``--db-pool-s
 It should go without saying that these options don't do much if you aren't using one of the SQL backends.
 
-Of the two databases available, PostgreSQL is the better supported and recommended configuration. MySQL still works, of course, but it can't take advantage of some of the very helpful Postgres features.
+Of the available database backends, PostgreSQL is the better supported and recommended configuration. MySQL (or MariaDB) still works, of course, but it can't take advantage of some of the very helpful Postgres features.
 
-MySQL mode
-----------
+MySQL / MariaDB mode
+--------------------
 
 Goiardi can use MySQL to store its data, instead of keeping all its data in memory (and optionally freezing its data to disk for persistence).
diff --git a/docs/features/search.rst b/docs/features/search.rst
index 1bcaad73..8f412fdf 100644
--- a/docs/features/search.rst
+++ b/docs/features/search.rst
@@ -10,7 +10,7 @@ Additional different search backends are now a possibility as well; goiardi sear
 Ersatz Solr Search
 ------------------
 
-Nothing special needs to be done to use this search. It remains the default search implementation, and the only choice for the in-memory/file based storage and MySQL. It works well for smaller installations, but when you get in the neighborhood of hundreds of nodes it begins to get bogged down.
+Nothing special needs to be done to use this search. It remains the default search implementation, and the only choice for the in-memory/file-based storage and MySQL/MariaDB. It works well for smaller installations, but when you get into the neighborhood of hundreds of nodes it begins to get bogged down.
 
 Postgres Search
 ---------------
diff --git a/docs/features/secrets.rst b/docs/features/secrets.rst
index 6284e7e1..d844416f 100644
--- a/docs/features/secrets.rst
+++ b/docs/features/secrets.rst
@@ -28,7 +28,7 @@ Populating
 
 A new goiardi installation won't need to do anything special to use vault for secrets - assuming everything's set up properly, new clients and users will work as expected.
 
-Existing goiardi installations will need to transfer their various secrets into vault. A persistent but not DB backed goiardi installation will need to export and import all of goiardi's data. With MySQL or Postgres, it's much simpler.
+Existing goiardi installations will need to transfer their various secrets into vault. A persistent but not DB-backed goiardi installation will need to export and import all of goiardi's data. With MySQL/MariaDB or Postgres, it's much simpler.
 
 For each object, get its key or password hash from the database and make a JSON file like this: ::
diff --git a/docs/index.rst b/docs/index.rst
index 7f337d6e..fac8811b 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -6,7 +6,7 @@
 Welcome to goiardi's documentation!
 ===================================
 
-Goiardi is an implementation of the Chef server (http://www.chef.io) written in Go. It can either run entirely in memory with the option to save and load the in-memory data and search indexes to and from disk, drawing inspiration from chef-zero, or it can use MySQL or PostgreSQL as its storage backend. Cookbooks can either be stored locally, or optionally in Amazon S3 (or a compatible service).
+Goiardi is an implementation of the Chef server (http://www.chef.io) written in Go. It can either run entirely in memory with the option to save and load the in-memory data and search indexes to and from disk, drawing inspiration from chef-zero, or it can use MySQL, MariaDB, or PostgreSQL as its storage backend. Cookbooks can either be stored locally, or optionally in Amazon S3 (or a compatible service).
 
 Like all software, it is a work in progress. Goiardi now, though, should have all the functionality of the open source Chef Server, plus some extras like reporting, event logging, and a Chef Push-like feature called "shovey". It does not support other Enterprise Chef type features like organizations at this time. When used, knife works, and chef-client runs complete successfully. Almost all chef-pedant tests run successfully, with a few disagreements about error messages that don't impact the clients. It does pretty well against the official chef-pedant, but because goiardi handles some authentication matters a little differently than the official chef-server, there is also a fork of chef-pedant located at https://github.com/ctdk/chef-pedant that's custom-tailored to goiardi.
diff --git a/docs/installation.rst b/docs/installation.rst
index 00cc928e..0da69df7 100644
--- a/docs/installation.rst
+++ b/docs/installation.rst
@@ -101,11 +101,11 @@ Currently available command line and config file options::
                              Useful when goiardi is sitting behind a reverse
                              proxy that uses SSL, but is communicating with the
                              proxy over HTTP. [$GOIARDI_HTTPS_URLS]
   --disable-webui            If enabled, disables connections and logins to
                              goiardi over the webui interface.
                              [$GOIARDI_DISABLE_WEBUI]
-  --use-mysql                Use a MySQL database for data storage. Configure
-                             database options in the config file.
+  --use-mysql                Use a MySQL/MariaDB database for data storage.
+                             Configure database options in the config file.
                              [$GOIARDI_USE_MYSQL]
   --use-postgresql           Use a PostgreSQL database for data storage.
                              Configure database options in the config file.
@@ -239,7 +239,7 @@ Currently available command line and config file options::
                              disable sandbox purging.
                              [$GOIARDI_PURGE_SANDBOXES_AFTER]
 
-MySQL connection options (requires --use-mysql):
+MySQL/MariaDB connection options (requires --use-mysql):
   --mysql-username=          MySQL username [$GOIARDI_MYSQL_USERNAME]
   --mysql-password=          MySQL password [$GOIARDI_MYSQL_PASSWORD]
   --mysql-protocol=          MySQL protocol (tcp or unix)
diff --git a/docs/upgrading.rst b/docs/upgrading.rst
index 75e91709..2992813b 100644
--- a/docs/upgrading.rst
+++ b/docs/upgrading.rst
@@ -5,7 +5,7 @@ Upgrading
 Upgrading goiardi is generally a straightforward process. Usually all you should need to do is get the new sources and rebuild (using the ``-u`` flag when running ``go get`` is a good idea, to ensure the dependencies are up to date as well), or download the appropriate new binary. However, sometimes a little more work is involved. Check the release notes for the new release in question for any extra steps that may need to be done.
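+
+For a source build, that typically amounts to the following (import path per the goiardi repository; adjust if you build from a fork): ::
+
+    # fetch or update goiardi and its dependencies, then rebuild and install
+    go get -u github.com/ctdk/goiardi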
 If you're running one of the SQL backends, you may need to apply database patches (either with sqitch or by hand), and in-memory mode especially may require using the data import/export functionality to dump and load your chef data between upgrades if the binary save file compatibility breaks between releases. However, while it should not happen often, occasionally more serious preparation will be needed before upgrading. It won't happen without a good reason, and the needed steps will be clearly outlined to make the process as painless as possible.
 
-As a special note, if you are upgrading from any release prior to 0.6.1-pre1 to 0.7.0 and are using one of the SQL backends, the upgrade is one of the special cases. Between those releases the way the complex data structures associated with cookbook versions, nodes, etc. changed from using gob encoding to json encoding. It turns out that while gob encoding is indeed faster than json (and was in all the tests I had thrown at it) in the usual case, in this case json is actually significantly faster, at least once there are a few thousand coobkooks in the database. In-memory datastore (including file-backed in-memory datastore) users are advised to dump and reload their data between upgrading from <= 0.6.1-pre1 and 0.7.0, but people using either MySQL or Postgres *have* to do these things:
+As a special note, if you are upgrading from any release prior to 0.6.1-pre1 to 0.7.0 and are using one of the SQL backends, the upgrade is one of the special cases. Between those releases, the encoding of the complex data structures associated with cookbook versions, nodes, etc. changed from gob to json. It turns out that while gob encoding is indeed faster than json in the usual case (and was in all the tests I had thrown at it), in this particular case json is actually significantly faster, at least once there are a few thousand cookbooks in the database. In-memory datastore (including file-backed in-memory datastore) users are advised to dump and reload their data when upgrading from <= 0.6.1-pre1 to 0.7.0, but people using MySQL, MariaDB, or Postgres *have* to do these things (a sketch of the sequence follows the list):
 
 * Export their goiardi server's data with the ``-x`` flag.
 * Either revert all changes to the db with sqitch, then redeploy, or drop the database manually and recreate it from either the sqitch patches or the full table dump of the release (provided starting with 0.7.0)
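+
+A rough sketch of that sequence for a MySQL/MariaDB setup follows. The database name, sqitch target, and file paths here are hypothetical, and the ``-m`` import flag is assumed as the counterpart of the ``-x`` export flag (check ``goiardi --help``); adjust accordingly for Postgres: ::
+
+    # 1. export all server data with the old (pre-0.7.0) binary
+    goiardi --use-mysql -c /etc/goiardi/goiardi.conf -x /tmp/goiardi-export.json
+
+    # 2. revert the old schema, then deploy the 0.7.0 schema with sqitch
+    #    (run from the MySQL sqitch plan directory in your goiardi checkout,
+    #    or drop and recreate the database by hand instead)
+    sqitch revert db:mysql://goiardi@localhost/goiardi
+    sqitch deploy db:mysql://goiardi@localhost/goiardi
+
+    # 3. re-import the dump with the new (0.7.0) binary
+    goiardi --use-mysql -c /etc/goiardi/goiardi.conf -m /tmp/goiardi-export.json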