Alternative deployments with Docker (Traefik.io) #22
I am trying to understand (sorry, not a Docker expert) how I can make this work with my Traefik.io-based host (I also use Portainer, so docker-compose isn't preferred).
Looking at the docker-compose file, the Postgres database is easy to replicate, and I think the actual ODK Central service should also work. But from there on I am a bit lost...
However, what is this "mail" service needed for (it seems to be an SMTP server?), and how can I avoid running an instance of nginx?
Traefik.io pretty much covers all my reverse-proxy/HTTPS needs, and Node.js usually comes with a built-in web server, no? So which port is exposed in the "service" container that I could point Traefik.io to?
Thanks for the help!
Comments
Related, and answers some questions already:
We are a small team and the project is young; our official support is for the whole stack as-is, with docker-compose managing the entire infrastructure. We cannot guarantee that these things will work well now or later once you break out of that packaging, nor that we won't make some change that breaks your custom setup. That said, I'm happy to offer answers where I can. As you saw on the other thread, mail is just an SMTP relay; any SMTP relay will work if you just point the service at it. The nginx instance is in charge of serving the static HTML/JS/CSS assets that comprise the frontend. So you can avoid running it, but you'll need to supply some alternative way to serve those files; the backend service does not do this.
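For example (just a sketch, not an officially supported setup; the mount path below is a placeholder for wherever you put the built frontend assets), any static file server can take nginx's place:
```sh
# Illustrative only: serve the built frontend assets with any static file
# server. /srv/central-client is an assumed path, not the repo's documented
# layout; caddy here is just one example of such a server.
docker run -d --name central-frontend \
  -v /srv/central-client:/usr/share/caddy:ro \
  -p 8080:80 \
  caddy:2
```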
Is it possible to also pass the Postgres service address:port via an ENV variable, instead of through the command in the compose file? As stated above, I am trying to integrate this into an existing infrastructure, and this would be helpful in doing so. Also, autostarting ./start-odk.sh by default instead of relying on that custom command in the compose file would probably help.
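For illustration, this is the sort of thing I mean (purely hypothetical; these variable names are not something the project currently reads):
```sh
# Hypothetical: a self-contained image that reads its database location from
# the environment and starts itself, so no custom compose command is needed.
docker run -d --name central-service \
  -e DB_HOST=postgres.internal.example.com \
  -e DB_PORT=5432 \
  example/odk-central-service
```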
hm. i am wary of passing more and more things through env vars, since eventually the whole thing will be a mess of env vars and it'll be really difficult to eg update config structures, etc, since everyone will just have a pile of customized variables. what's the issue with modifying the config file? as for …
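for illustration, a rough sketch of the modify-the-config-file route (the file path and JSON keys below are assumptions for illustration; check the repo for the actual template):
```sh
# sketch: point the backend at an external postgres by editing the config
# template before building, then rebuild and restart just the service
# container. file path and key names are illustrative assumptions.
sed -i 's/"host": "postgres"/"host": "db.internal.example.com"/' files/service/config.json.template
docker-compose build service
docker-compose up -d service
```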
It's a de facto standard for Docker containers to be configured through env variables and to include autostart scripts so that they are fully self-contained. Your docker-compose setup works fine if you spin it up on a dedicated machine, but then the question comes up: why use Docker in the first place? Any other, more complex container orchestration setup will require scrapping that docker-compose file of yours and finding another way to launch the containers. In that case, following standards and avoiding having to pass custom commands will really help.
yeah, our goals are not aligned with the typical docker project here, is the problem. our purpose in using docker is to have a well-known, highly regimented, multi-platform install process for our users to self-install the software, without having to resort to really heavy and large things like full VM images. ergo, docker-compose works really well for us. we want to provide easy deployment to single machines. even our very largest installations barely hit the point where distributing the load across maybe two machines is a remote concern, let alone service orchestration. 99.99% of ODK installs just don't see anywhere near that much load. (hopefully this answers your question: why use docker in the first place?)
i am not an experienced docker user, and i fully recognize that i do not understand what all the generally accepted best practices are. please do let me know when you see problems that can be resolved, as you are doing; i appreciate your notes. though, please also be nice to me, because many of the decisions in this repository were made organically and on minimal experience; they are not wilful attempts to cut against the grain. but on the other hand, please also recognize that, again, our needs and the sorts of things we are trying to accomplish are very different from the typical docker project.
in light of all that, i will sign off on two thoughts:
thanks for your feedback and thoughts so far!
OK, I get your reasoning, but I still think you have it a bit confused. If your target is only to have an easy install script for a fresh dedicated machine, then Docker is really overkill. But even if you use your compose script, what happens if you need to restart only the service container? If the service were configured to autostart via systemd (which can easily include a small startup delay) and the database configuration were an env variable, that would be extremely easy and intuitive. But with your setup you basically have to reinstall the entire Docker setup just to restart the service container, or alternatively fiddle with some really non-standard custom commands to restart it manually.
actually i think the docker setup is a strength here. again, i think you're coming at the question from the angle of somebody who is comfortable and familiar with these sorts of tools. having a "simple install script" sounds nice, but people will end up having to troubleshoot different linux environments, or whatever versions of software happen to be in the distribution package repository, and then things start to drift. and assuming they actually even manage to troubleshoot these things and install, then suddenly they are on a nonstandard setup. what happens when a new version of Central comes out? with the docker setup, it's a safeguarded, predictable environment, and we can provide some relatively simple commands for eg upgrading:
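something roughly along these lines (a sketch; the exact commands may differ slightly from the project's documented steps):
```sh
# fetch the new release of the repo and its submodules
git pull
git submodule update -i
# rebuild the images and restart the stack
docker-compose build
docker-compose stop
docker-compose up -d
```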
and that's it. and we can be very, very sure it'll work just fine. and in contrast to eg a full VM, here we have a relatively clean way to issue server- or client-only minor patches for particular users, which we have already done with success.
(and as for restarting only the service container: under what scenario? if something goes wrong, they can just kick the whole setup if they don't want to learn the individual commands. a small delay is not going to work every time, and will lead to confusion and support tickets. we can move the check for postgres being available into the javascript of the service instead of having it be the custom command in the compose file.
and as for the env variables, i feel like my answer has not changed there. if you and i, and people like you and i, were the primary consumers of this project, i would not have a problem making this change. but we are not, and so i do.)
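for reference, restarting just the service container under docker-compose doesn't require reinstalling anything (a sketch, assuming the backend's compose service is named "service"):
```sh
# restart only the backend service container
docker-compose restart service
# or stop and recreate just that container
docker-compose stop service
docker-compose up -d service
```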
IMHO a .deb package with instructions to install it on a fresh Ubuntu server would work better for the use case you seem to have in mind. But OK... try to look at it from a different perspective: at least at this stage of development, your "target group" is probably imaginary at best (and I doubt there will ever be people interested in setting up their own ODK Central server who have so little background knowledge and are at the same time not afraid to try doing it). But on the other hand, the way this is set up right now makes it really difficult for potentially more knowledgeable testers, and maybe contributors with an existing Docker setup, to actually try it out (a big strength of "normal" Docker containers) without specifically renting a dedicated VPS or messing around with custom docker commands.
i guess we will just have to agree to disagree on this count. none of what i am saying is hypothetical, by the way: it's all things we have learned the hard way from ten years of running this project. as an example, i've already helped somebody (successfully!) install ODK Central who needed help because they didn't understand that you press …
again, i am very happy to make reasonable changes that don't cut against our primary user base/use case, and i am very happy to accept an alternative set of scripts that are more open-ended. i am still very happy to make or accept changes around how the …
(and honestly, if you're an advanced user you barely need any of this stuff anyway: it's a node service, a postgres database, a pile of html/resources to serve statically, and somewhere or another a mail transport.)
I think @issa-tseng's approach of accepting an alternative set of scripts that are more open-ended is a great way to go. @poVoq, can you clone the project and make whatever proposed adjustments you had in mind in a branch? That'd allow us to see exactly what you had in mind and decide based on that.
I just wanted to contribute to this older issue since it's still open: I've put together an alternative build (with a modified startup script) that packages Central as a single image with one port exposed and no DB. It's a bit more appropriate for a serverless deployment like the one I think @poVoq is talking about. It still needs nginx to serve the client and the pyxform server, but it looks like a single service. My target use case is running clustered on Amazon Elastic Container Service behind a load balancer that manages the certificate, with the DB on a shared RDS server.
hey @callawaywilson, that looks nice. thanks for letting us know about it! would you feel comfortable if we mentioned it in the documentation as an option? you might get some support questions and issue reports as a result.
Hi @issa-tseng, it's still a bit half-baked right now for a generic use case, but I'm hoping to streamline and improve it very soon. Could I get back to you when it's at a supportable level? I still need to solve the user bootstrap issue for when you don't have access to the container. Once that's done I'm hoping to publish the image to Docker Hub and add a 'Deploy to Heroku' button.
oh yes, absolutely! thank you :)
@issa-tseng, I think the alternative container is a bit more presentable now with Heroku deployability (there's no 'Deploy to Heroku' button because Heroku doesn't support submodules). If you want to point people to it, feel free! I'll try to keep it up to date, pointing to tagged releases of Central. It modifies config.json, the nginx configuration, and the startup script, and has a couple of JS functions that reference some Central code, so it will need to be tested against each update.
Thanks everyone for the useful discussion in this issue. I've filed getodk/docs#1880 so that we can link to @callawaywilson's repo from the user docs (better late than never!). Thanks again for putting that together, @callawaywilson. The Central team isn't planning other follow-up work from this issue, so I'm going to go ahead and close it out.