nurettin
1 year ago
[-]
I have used Python in production for years, across multiple servers and multiple racks, and deployment has always been as simple as

./deploy.sh pull sync migrate seed restart

pull calls git pull, sync runs pipenv sync, migrate runs Django's migrate command, seed runs a custom Django management command called seed, and restart calls systemctl --user restart.
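
The whole script is barely more than a case statement; a rough sketch (the repo path, unit name, and exact flags here are placeholders, not the real script):

  #!/usr/bin/env bash
  # Run each deployment step given on the command line, in order.
  set -euo pipefail
  cd /srv/myapp  # placeholder: wherever the repo is checked out

  for step in "$@"; do
    case "$step" in
      pull)    git pull --ff-only ;;                      # fetch the latest code
      sync)    pipenv sync ;;                             # install exactly what Pipfile.lock pins
      migrate) pipenv run python manage.py migrate ;;     # apply Django migrations
      seed)    pipenv run python manage.py seed ;;        # custom management command
      restart) systemctl --user restart myapp ;;          # placeholder unit name
      *)       echo "unknown step: $step" >&2; exit 1 ;;
    esac
  done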

toomuchtodo
1 year ago
[-]
This was not my experience building infra for a startup heavily leveraging a Python monolith. It was painful AF (both when developing locally and deploying to VMs) and Docker made the deployment story palatable (build, push to hundreds of VMs, run).
reply
nurettin
1 year ago
[-]
Any details as to why?
reply
toomuchtodo
1 year ago
[-]
Virtual envs, dependency hell, etc. I just want a binary to build and push (previously to VMs, now to k8s). Docker does that for Python.
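
i.e. the whole deployment story collapses to something like this (the image name and registry are placeholders):

  # build one immutable artifact with the interpreter and every dependency baked in
  docker build -t registry.example.com/myapp:1.2.3 .
  docker push registry.example.com/myapp:1.2.3

  # on each VM (or via the orchestrator): pull and run the exact same image
  docker pull registry.example.com/myapp:1.2.3
  docker run -d --name myapp registry.example.com/myapp:1.2.3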

Target simplicity; it's the ultimate sophistication.

reply
nurettin
1 year ago
[-]
I target simplicity by having one virtual environment instead of many, and by synchronizing dependencies with the main branch instead of... hell?
reply
toomuchtodo
1 year ago
[-]
Have you tried deploying to hundreds or thousands of VMs? In my experience, managing container deployment and state is much easier than wrestling with inconsistent environment state on the underlying compute for whatever reason.
reply
nurettin
1 year ago
[-]
For the life of me I don't understand how deploying to 10 machines is functionally different from deploying to a billion machines, since the process is exactly the same. Unless you've sabotaged your deployment machines with some manual meddling, the story is the same whether you use Docker images or Ansible.
reply
pclmulqdq
1 year ago
[-]
When you run a single, simple Python service, it's fantastic. When you scale your Python, it's an awful dependency hell.
reply
nurettin
1 year ago
[-]
Can you please explain how this dependency hell manifests if you are using a dependency resolver such as Poetry or Pipenv, which locks the appropriate maximum versions? Once you've locked the versions, you just run pipenv sync in prod, like I said.
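
To be concrete, the workflow I mean (requests is just an example package):

  # on a dev machine: resolve everything and pin it in Pipfile.lock
  pipenv install requests
  git add Pipfile Pipfile.lock
  git commit -m "Lock dependency versions"

  # in prod: install exactly what the lock file pins, nothing more
  pipenv sync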
reply
pclmulqdq
1 year ago
[-]
Yes - you pull in two third-party libraries that need two incompatible versions of a shared dependency, and that shared dependency is not backwards compatible due to deprecated functions or whatnot. The dependency could even be Python itself.

This is an incredibly common occurrence, especially with ML systems, which are not designed by people with an engineering mindset.

reply
nurettin
1 year ago
[-]
Not getting the package versions you want is a local dev problem, and it is not specific to Python at all. I will remind you that this thread is about deploying to prod and dispelling vague assertions about nebulous problems that are supposedly solved by Docker.
reply
sofixa
1 year ago
[-]
That only works until you have a conflicting dependency (same codebase: a.py imports libraryA which needs dep>3.0.2, and b.py imports libraryB that needs dep==1.8.3). Then you're screwed.
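
Sketching it with those made-up names (libraryA, libraryB, and dep are not real packages):

  # libraryA requires dep>3.0.2, libraryB requires dep==1.8.3
  pipenv install libraryA libraryB
  # no single version of dep can satisfy both constraints, so the lock step
  # fails rather than producing a usable Pipfile.lock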
reply
nurettin
1 year ago
[-]
No, pipenv install will resolve that and use the appropriate max version.
reply
pclmulqdq
1 year ago
[-]
It will use the max version, but that version has likely deprecated or removed things that the other dependency relies on in its pinned version. Pinning an old version of a library is how a lot of Python maintainers resolve deprecations in their dependencies.
reply