
Relocating CouchDB Data


Issue

  • Your app using CouchDB crashes, with a log message something like this:
couchdb.http.ServerError unknown_error 500
  • You are sure that your app itself has no particular issue. So what is going on?
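  • Before digging into the app, you can confirm the server side is unhealthy with a quick check against the node's HTTP port (9586 on this cluster, as the startup logs further below show; adjust for yours). A healthy node answers with a small Welcome JSON document:
curl http://localhost:9586/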

  • Our CouchDB here is a cluster setup. Log in to each node and check the disk usage.

root@r-d3i1sr7z-0:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           799M   83M  716M  11% /run
/dev/vda1       9.9G  7.4G  2.0G  79% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vdb         59G   52M   56G   1% /mnt
tmpfs           799M     0  799M   0% /run/user/1000
root@r-d3i1sr7z-0:~# du -hs /opt/couchdb/
4.8G    /opt/couchdb/

root@r-d3i1sr7z-1:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           799M   81M  718M  11% /run
/dev/vda1       9.9G  9.4G     0 100% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vdb         59G   52M   56G   1% /mnt
tmpfs           799M     0  799M   0% /run/user/1000
root@r-d3i1sr7z-1:~# du -hs /opt/couchdb/
7.8G    /opt/couchdb/

root@r-d3i1sr7z-2:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           799M   81M  718M  11% /run
/dev/vda1       9.9G  9.4G     0 100% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vdb         59G   52M   56G   1% /mnt
tmpfs           799M     0  799M   0% /run/user/1000
root@r-d3i1sr7z-2:~#
root@r-d3i1sr7z-2:~# du -hs /opt/couchdb/
7.7G    /opt/couchdb/
  • Basically, two nodes are 100% full on the root filesystem. That's because we deployed CouchDB into /opt, which sits on the same root device /dev/vda1, which has only 10GB.

  • How did we end up here? We had been checking each database's size from time to time through the Fauxton UI, and it turns out that this size reporting does not reflect the actual disk usage (see the API check after this list).

  • We do have spare disk space at /mnt, mounted from device /dev/vdb, with about 56GB available.
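  • To see what CouchDB itself reports, you can query the database info endpoint directly (a sketch; mydb is a placeholder database name, 9586 is this cluster's port, and you may need to add admin credentials). In CouchDB 2.0 the response includes a sizes object: sizes.file is the actual bytes on disk, while sizes.active is only the live data that compaction would keep, which can make UI figures look much smaller than what du shows:
curl -s http://localhost:9586/mydb
  • Note also that view indexes consume disk space of their own under the data directory, which a per-database size figure does not include.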

Relocating

  • First, stop the couchdb service:
root@r-d3i1sr7z-0:~# systemctl stop couchdb
root@r-d3i1sr7z-0:~# systemctl status couchdb|grep active
   Active: inactive (dead) since Sat 2017-04-29 07:22:13 UTC; 38s ago
  • Move the CouchDB deployment from /opt to /mnt and change the couchdb user's home directory in one step:
root@r-d3i1sr7z-0:~# time usermod -m -d /mnt/couchdb couchdb
  • The previous step is equivalent to running these two commands; skip them if you have already done the step above:
root@r-d3i1sr7z-0:~# time mv /opt/couchdb/ /mnt
root@r-d3i1sr7z-0:~# usermod -d /mnt/couchdb couchdb
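  • Either way, it is worth verifying the move before going further, for example:
getent passwd couchdb | cut -d: -f6
ls -ld /mnt/couchdb
  • The first command should print /mnt/couchdb, and the second should show the directory still owned by the couchdb user (both mv and usermod -m preserve ownership).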
  • Now we need to update the systemd couchdb.service unit file. First, check where /opt appears:
root@r-d3i1sr7z-0:~# cat /etc/systemd/system/couchdb.service|grep opt
ExecStart=/opt/couchdb/bin/couchdb -o /dev/stdout -e /dev/stderr
  • Search for opt and replace it with mnt. You may use vi if you like (a safer, more targeted variant is noted after the check below):
root@r-d3i1sr7z-0:~# sed -i 's/opt/mnt/g' /etc/systemd/system/couchdb.service
  • Make sure it got updated:
root@r-d3i1sr7z-0:~# cat /etc/systemd/system/couchdb.service|grep opt
root@r-d3i1sr7z-0:~# cat /etc/systemd/system/couchdb.service|grep mnt
ExecStart=/mnt/couchdb/bin/couchdb -o /dev/stdout -e /dev/stderr
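  • One caution on the blunt s/opt/mnt/g above: it rewrites every occurrence of the string opt in the unit file, not just the path. If your unit file contains anything else that matches, a more targeted substitution is safer:
sed -i 's|/opt/couchdb|/mnt/couchdb|g' /etc/systemd/system/couchdb.service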
  • Reload the systemd daemon:
root@r-d3i1sr7z-0:~# systemctl daemon-reload
  • Start CouchDB
root@r-d3i1sr7z-0:~# systemctl start couchdb
  • Check CouchDB status
root@r-d3i1sr7z-0:~# systemctl status couchdb
● couchdb.service - CouchDB Service
   Loaded: loaded (/etc/systemd/system/couchdb.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2017-04-29 07:40:16 UTC; 3s ago
 Main PID: 20223 (beam.smp)
   CGroup: /system.slice/couchdb.service
           ├─20223 /mnt/couchdb/bin/../erts-7.3/bin/beam.smp -K true -A 16 -Bd -- -root /mnt/couchdb/bin/.. -progname couchdb -- -home /mnt/couchdb -- -boot /mnt/couchdb/bin/../releases/2.0.0/couchdb -name [email protected] -setcooki
           ├─20235 /mnt/couchdb/bin/../erts-7.3/bin/epmd -daemon
           ├─20260 sh -s disksup
           ├─20262 /mnt/couchdb/bin/../lib/os_mon-2.4/priv/bin/memsup
           ├─20263 /mnt/couchdb/bin/../lib/os_mon-2.4/priv/bin/cpu_sup
           ├─20264 inet_gethost 4
           ├─20265 inet_gethost 4
           └─20266 inet_gethost 4

Apr 29 07:40:18 r-d3i1sr7z-0 couchdb[20223]: [info] 2017-04-29T07:40:18.883255Z [email protected] <0.7.0> -------- Application oauth started on node '[email protected]'
Apr 29 07:40:18 r-d3i1sr7z-0 couchdb[20223]: [info] 2017-04-29T07:40:18.924000Z [email protected] <0.224.0> -------- Apache CouchDB 2.0.0 is starting.
Apr 29 07:40:18 r-d3i1sr7z-0 couchdb[20223]: [info] 2017-04-29T07:40:18.924243Z [email protected] <0.225.0> -------- Starting couch_sup
Apr 29 07:40:19 r-d3i1sr7z-0 couchdb[20223]: [info] 2017-04-29T07:40:19.537781Z [email protected] <0.224.0> -------- Apache CouchDB has started. Time to relax.
Apr 29 07:40:19 r-d3i1sr7z-0 couchdb[20223]: [info] 2017-04-29T07:40:19.539026Z [email protected] <0.224.0> -------- Apache CouchDB has started on http://0.0.0.0:9586/
Apr 29 07:40:19 r-d3i1sr7z-0 couchdb[20223]: [info] 2017-04-29T07:40:19.540456Z [email protected] <0.7.0> -------- Application couch started on node '[email protected]'
Apr 29 07:40:19 r-d3i1sr7z-0 couchdb[20223]: [info] 2017-04-29T07:40:19.541183Z [email protected] <0.7.0> -------- Application ets_lru started on node '[email protected]'
Apr 29 07:40:19 r-d3i1sr7z-0 couchdb[20223]: [info] 2017-04-29T07:40:19.590238Z [email protected] <0.7.0> -------- Application rexi started on node '[email protected]'
Apr 29 07:40:19 r-d3i1sr7z-0 couchdb[20223]: [info] 2017-04-29T07:40:19.771402Z [email protected] <0.7.0> -------- Application mem3 started on node '[email protected]'
Apr 29 07:40:19 r-d3i1sr7z-0 couchdb[20223]: [info] 2017-04-29T07:40:19.771724Z [email protected] <0.7.0> -------- Application fabric started on node '[email protected]'
  • Check the disk usage again:
root@r-d3i1sr7z-0:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           799M   81M  718M  11% /run
/dev/vda1       9.9G  2.9G  6.6G  31% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vdb         59G  4.9G   52G   9% /mnt
tmpfs           799M     0  799M   0% /run/user/1000
  • Repeat the steps on the rest of the nodes, one by one; that way you avoid downtime, in some sense, since the remaining nodes keep serving while one is down. The whole sequence is condensed in the sketch below.
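  • For reference, the per-node sequence boils down to something like this sketch (assuming identical paths on every node; run it on one node, wait for that node to come back healthy, then move on to the next):
systemctl stop couchdb
usermod -m -d /mnt/couchdb couchdb
sed -i 's|/opt/couchdb|/mnt/couchdb|g' /etc/systemd/system/couchdb.service
systemctl daemon-reload
systemctl start couchdb
systemctl status couchdb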

  • Alternatively, you may move only the {COUCHDB_HOME}/data directory and update the {COUCHDB_HOME}/etc/local.ini configuration to point to the new data directory. This might be better suited to those using a CouchDB distribution package installation, i.e. apt/yum. If you build CouchDB from source, I like moving the whole directory as in the steps above. The choice is yours!
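  • For that variant, the change in {COUCHDB_HOME}/etc/local.ini would look something like this (a sketch; /mnt/couchdb-data is a hypothetical target path, and the couchdb user must own it). Moving view_index_dir as well is usually what you want, since view indexes take disk space too:
[couchdb]
database_dir = /mnt/couchdb-data
view_index_dir = /mnt/couchdb-data
  • As before: stop the service, mv the old data directory to the new location, apply the configuration change, then start the service again.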
