
Alternative deployments with Docker (Traefik.io) #22

Closed
poVoq opened this issue Sep 10, 2018 · 18 comments
Labels
ops Docker, nginx, ops to deploy Central

Comments


poVoq commented Sep 10, 2018

I am trying to understand (sorry, I'm not a Docker expert) how I can make this work with my Traefik.io-based host (I'm also using Portainer, so Docker Compose isn't preferred).

Looking at the docker-compose file the postgres database is easy to replicate, and I think the actual ODK-central service should also work. But from then on I am a bit lost...

However, what is this "mail" service needed for (it seems to be an SMTP server?), and how can I avoid running an instance of Nginx?

Traefik.io has me pretty much covered for all reverse-proxy/HTTPS needs, and Node.js usually comes with a built-in web server, no? So which port is exposed in the "service" container that I could point Traefik.io to?

Thanks for the help!


poVoq commented Sep 10, 2018

Related and answers some questions already:
#17

issa-tseng (Member) commented:

We are a small team and the project is young; our official support is for the whole stack as-is, with docker-compose managing the entire infrastructure. We cannot guarantee that these things will work well now or later once you break out of that packaging, nor that we won't make some change that might break your custom setup.

That said, I'm happy to offer answers where I can.

As you saw on the other thread, mail is just an SMTP relay. Any SMTP relay will work if you point the service configuration at it by IP/DNS. The same goes for postgres, though the further you stray from our postgres setup, the less likely the built-in backup solution is to work for you, and the more likely you'll have to build your own.

The nginx instance is in charge of serving the static html/js/css/etc assets that comprise the frontend. So you can avoid running it, but you'll need to supply some alternative way to serve those files; the backend service does not do this.
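for illustration only, pointing the service at external hosts might look something like the sketch below. these key names are made up for the example; the real names are whatever is in the config file shipped in the repo.

```json
{
  "database": { "host": "my-postgres.example.net", "port": 5432 },
  "email": { "host": "my-smtp-relay.example.net", "port": 25 }
}
```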


poVoq commented Nov 28, 2018

Is it possible to pass the postgres service address:port via an ENV variable instead of through the command in the compose file? As stated above, I am trying to integrate this into an existing infrastructure, and that would be helpful.

Also, autostarting ./start-odk.sh by default, instead of relying on that custom command in the compose file, would probably help.
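For example, something along these lines (the image and variable names here are made up, just to illustrate the pattern):

```yaml
service:
  image: odk-central-service          # hypothetical image name
  environment:
    - DB_HOST=postgres.example.internal   # made-up variable names,
    - DB_PORT=5432                        # just to show the idea
  # no custom `command:` needed if the image autostarts start-odk.sh
```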

issa-tseng (Member) commented:

hm. i am wary of passing more and more things through env vars, since eventually the whole thing will be a mess of env vars and it'll be really difficult to eg update config structures, etc, since everyone will just have a pile of customized variables. what's the issue with modifying the config file?

as for start-odk.sh, is it the wait-for-it that is the problem, or the location of the command within the compose file?


poVoq commented Nov 29, 2018

It's a de facto standard for Docker containers to be configured through env variables and to include autostart scripts, so that they are fully self-contained.

Your docker-compose setup works fine if you spin it up on a dedicated machine, but then the question comes up: why use Docker in the first place?

Any more complex container orchestration setup will require scrapping your docker-compose file and finding another way to launch the containers. In that case, following standards and avoiding custom commands would really help.

issa-tseng (Member) commented:

yeah—our goals are not aligned with the typical docker project here, is the problem.

our purpose in using docker is to have a well-known, highly-regimented, multi-platform install process for our users to self-install the software, without having to resort to really heavy and large things like full VM images.

ergo, docker-compose works really well for us. we want to provide easy deployment to single machines. even our very largest installations barely hit the point where distributing the load across maybe two machines is a remote concern, let alone service orchestration. 99.99% of ODK installs just don’t actually see anywhere near that much load.

(hopefully this answers your question: why use docker in the first place?)

i am not an experienced docker user, and i fully recognize that i do not understand all the generally accepted best practices. please do let me know when you see problems that can be resolved, as you are doing; i appreciate your notes. though, please also be nice to me, because many of the decisions in this repository were made organically and on minimal experience; they are not willful attempts to cut against the grain.

but on the other hand, please do also recognize that again, our needs and the sorts of things we are trying to accomplish are very different from the typical docker project.

  • our users (including the people installing the software) are mostly non-technical.
  • our deployments are typically not very load intensive and features like service orchestration are a very very distant concern.
  • one of the absolute highest order bits for us is foolproofness. as a result, having a lot of twiddles in eg env vars is something we will reflexively shy away from: they add surface area where things go wrong, which increases support requests on the forum, which decreases the time we have to spend improving the actual software itself.
  • relatedly, having more twiddles and knobs makes it very difficult to maintain platform coherency moving forward across what will eventually be a great many thousands of instances of amateur-installed-and-maintained server software.

in light of all that, i will sign off on two thoughts:

  1. again, i ask whether the issue you have with startup is the wait-for-it script, or if it is the fact that the command is within the docker-compose file rather than the actual dockerfile itself. here i do not know what the best practice is so i emulated whatever sample i was using.
  2. if you have an idea for a more advanced configuration that would better suit power users and other environments, this is an open source project and i would very much welcome a second/alternative set of configurations that people could choose if they wish not to be limited by our more guardraily default.

thanks for your feedback and thoughts so far!


poVoq commented Nov 29, 2018

OK, I get your reasoning, but I still think you have it a bit confused. If your target is only to have an easy install script for a fresh dedicated machine, then Docker is really overkill.

But even if you use your compose script, what happens if you need to restart only the service container? If the service were configured to autostart via systemd (which can easily include a small startup delay) and the database configuration were an env variable, that would be extremely easy and intuitive.

But with your setup, you basically have to re-run the entire Docker setup just to restart the service container, or else fiddle with some really non-standard custom commands to restart it manually.

issa-tseng (Member) commented:

actually i think the docker setup is a strength here.

again, i think you're coming at the question from the angle of somebody who is comfortable and familiar with these sorts of tools. having a "simple install script" sounds nice, but then people end up having to troubleshoot different linux environments, or whatever versions of software happen to be in the distribution's package repository, and things start to drift. and assuming they even manage to troubleshoot these things and install, they are suddenly on a nonstandard setup. what happens when a new version of Central comes out?

with the docker setup, it's a safeguarded, predictable environment and we can provide some relatively simple commands for eg upgrading:

git pull
git submodule update
docker-compose build
systemctl restart docker-compose@central

and that's it. and we can be very very sure it'll work just fine.

and in contrast to eg a full VM, here we have a relatively clean way to issue server-only or client-only minor patches for particular users, which we have already done with success.

issa-tseng (Member) commented:

(and as for restarting the service container only: under what scenario? if something goes wrong they can just kick the whole setup if they don't want to learn the individual commands. a small delay is not going to work every time, and will lead to confusion and support tickets.

we can move the check for postgres being available into the javascript of the service, instead of having it be the wait-for-it script; while i would welcome a patch to do this, it's not something i have time for at the moment.
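purely to illustrate the shape of it (hypothetical code, nothing from this repository; the readiness check itself would be an actual postgres connection attempt supplied by the caller):

```javascript
// Hypothetical helper, not Central's code: retry an async readiness
// check (eg a postgres connection attempt) until it succeeds or we
// run out of attempts.
const waitFor = async (check, { retries = 30, delayMs = 1000 } = {}) => {
  for (let attempt = 1; attempt <= retries; attempt += 1) {
    try {
      return await check(); // success: the dependency is up
    } catch (err) {
      if (attempt === retries) throw err; // give up after the last try
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
};
```

the service would then call something like `waitFor(connectToPostgres)` before binding its port, replacing wait-for-it entirely.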

and as for the env variables, i feel like my answer has not changed there. if you and i, and people like you and i, were the primary consumers of this project, i would not have a problem making this change. but we are not, and so i do.)


poVoq commented Nov 29, 2018

IMHO, a .deb file with instructions to install it on a fresh Ubuntu server would work better for the use case you seem to have in mind.

But ok... try to look at it from a different perspective: at least at this stage in development, your "target group" is probably imaginary at best (and I doubt there will ever be people interested in setting up their own ODK-Central server who have so little background knowledge and are at the same time not afraid to try).

But on the other hand, the way this is set up right now makes it really difficult for more knowledgeable potential testers (and maybe contributors) with an existing Docker setup to actually try it out (a big strength of "normal" Docker containers) without specifically renting a dedicated VPS or messing around with custom docker commands.


issa-tseng commented Nov 29, 2018

i guess we will just have to agree to disagree on this count. none of what i am saying is hypothetical, by the way—it's all things we have learned the hard way from ten years of running this project.

as an example, i've already helped somebody (successfully!) install ODK Central, who needed help because they didn't understand that you press q to exit some less-like log views, so they thought their server had hung. they pressed q as instructed, ran the command to start the service, and everything is great now. so these users are definitely not imaginary. (and now try to imagine such a user trying to figure out what ubuntu is, and what to do with a .deb file.)

again, i am very happy to make reasonable changes that don't cut against our primary user base/use case, and i am very happy to accept an alternative set of scripts that are more open-ended. i am still very happy to make or accept changes around how the service container starts.

(and honestly, if you're an advanced user you barely need any of this stuff anyway: it's a node service, a postgres database, a pile of html/resources to serve statically, and somewhere or another a mail transport.)

issa-tseng mentioned this issue Nov 29, 2018

yanokwa commented Nov 29, 2018

I think @issa-tseng's approach of accepting an alternative set of scripts that are more open-ended is a great way to go. @poVoq, can you clone the project and make whatever adjustments you had in mind in a branch? That would let us see exactly what you have in mind and decide based on that.

issa-tseng added the ops label Apr 17, 2019
callawaywilson commented:

I just wanted to contribute to this older issue since it's still open: I've put together an alternative build (with a modified startup script) that packages Central as one image with one port exposed and no DB. It's a bit more appropriate for a serverless deployment like the one I think @poVoq is describing. It still uses nginx to serve the client and the pyxform server, but it looks like a single service.

My target use case is running clustered on Amazon Elastic Container Service behind a load balancer that manages the certificate and the DB on a shared RDS server.

https://github.com/bastiondev/odkcentralcontainer

issa-tseng (Member) commented:

hey @callawaywilson, that looks nice. thanks for letting us know about it! would you feel comfortable if we mentioned it in the documentation as an option? you might get some support questions and issue reports as a result.

callawaywilson commented:

Hi @issa-tseng, it's still a bit half-baked for a generic use case right now, but I'm hoping to streamline and improve it very soon. Could I get back to you when it's at a supportable level? I still need to solve the user-bootstrap issue for when you don't have access to the container. Once that's done, I'm hoping to publish the image to Docker Hub and add a 'Deploy to Heroku' button.

issa-tseng (Member) commented:

oh yes, absolutely! thank you :)

callawaywilson commented:

@issa-tseng, I think the alternative container is a bit more presentable now, with Heroku deployability (there's no 'Deploy to Heroku' button because Heroku doesn't support submodules). If you want to point people to it, feel free!

I'll try to keep it up to date, pointing to tagged releases of Central. It modifies the config.json, the nginx configuration, and the startup script, and has a couple of JS functions that reference some Central code, so it would need to be tested against each update.

matthew-white (Member) commented:

Thanks everyone for the useful discussion in this issue. I've filed getodk/docs#1880 so that we can link to @callawaywilson's repo from the user docs (better late than never!). Thanks again for putting that together, @callawaywilson. The Central team isn't planning other follow-up work from this issue, so I'm going to go ahead and close it out.

matthew-white closed this as not planned Nov 11, 2024