It's a program mormons use, old version of soaker.
People use that shit because all the environment dependencies for different ML stuff are different. Sometimes you get one working on the GPU, but it breaks the others. So docker prevents that. Also one-click install... i have no desire to learn about that shit.
Docker is like the install wizard of programming projects.
isn't everything related to ML open anyway?
so no need for "containers", just run the code natively.
Docker filters retards like yourselves.
thank you for admitting that docker is more a hindrance than a helping hand.
It's not. All the steps you declare in a Dockerfile you'd need to do natively anyway; the difference is that docker also ships an OS, so you don't even have to worry about where it runs: all the target machine needs is the docker binary. The dev's PC, a VPS, an EC2 instance, it doesn't matter. It saves you from a lot of headaches.
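To be concrete, the "steps you declare" are just a file like this. A minimal sketch, where the base image tag, the file names and the app itself (requirements.txt, main.py, myapp) are placeholders:
# throwaway Dockerfile; every name in here is example content
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
EOF
# build once, then the exact same image runs on the dev PC, a VPS or an EC2 box
docker build -t myapp .
docker run --rm myapp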
It also introduces hundreds of new headaches
It doesn't though
nta, it does, it's just a silly over-engineered solution.
lmao it 100% does
have you tried to give docker your gpu access?
right
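fwiw, these days GPU access is roughly one flag, assuming an NVIDIA card with the driver and nvidia-container-toolkit already installed on the host (the image tag below is just an example):
# hand every GPU to the container and check that they show up
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi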
>just run the code natively
This can become a huge pain in the ass for many different reasons and is exactly why docker exists.
It solves the "works on my machine" problem by delivering a small machine that the program works on. Believe me, developers are saving you hours of guesswork and downloads by delivering this.
>Just run it natively!
It runs natively. Docker containers are not VMs; they just get their own namespaced view of the network and filesystem. They run under the same kernel as your machine (or the WSL kernel if you use Windows).
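Easy to check yourself, assuming docker is installed and using some small image like alpine:
uname -r                           # kernel version on the host
docker run --rm alpine uname -r    # the same kernel version reported from inside a container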
>It solves the "works on my machine" problem by delivering a small machine that the program works on.
technically, it ships the distro and exact OS configuration, not the machine. I know what you meant and I'm being a pedantic autist, but it really helps when trying to understand patterns like this:
>hire server for 5 hours
>make it use a cuda python docker instead of wasting 1 hour installing the right driver etc.
>train
>profit
and total retard responses like this:
>it takes like ten seconds to figure out the driver you lazy moron.
>better yet if you supported open standards and used linux this wouldn't be a problem in the first place
(the tard is utterly incapable of extrapolating the general case from a simple and obvious example).
Yeah, it would be nice if every developer used the exact same distribution and if dependencies never, ever conflicted under any circumstances, but that's just not reality.
no i understand perfectly fine, i just would rather deal with the messy wiring than use that crap.
I'm not saying you shouldn't. I prefer to wire everything up myself. But the fact of the matter is that in the real world people have to collaborate, they have to meet deadlines, and they usually have to take advantage of the shortcuts available.
The fact that you answer with what you'd prefer just confirms you don't actually understand. All you can see is your personal experience.
kek NixOS users are exhibit A of ignoring reality to chase an ideal.
Docker definitely adds maintenance headaches and overhead. Even the simple act of locating and reading a log file becomes a chore. Security becomes a bigger hassle because now, instead of just having one OS to deal with, you have the base OS plus the distro in every single container. You have to jump through extra hoops to thwart supply chain attacks (corrupted upstream images). Build and deploy times for simple applications shoot through the roof compared to just copying a binary. It's really easy for devs to ship configurations that "just work" because they've left defaults for shit like database credentials, so now your production database's root password is 'password'. Docker tries hard to be a "lightweight VM", so you get all this overlayfs complexity to provide writable layers on top of immutable base images. The nightmare only gets worse if you're mounting NFS filesystems, since permissions in the container might not match the network share.
Containers solve real problems, but they definitely introduce headaches too, and it's really easy to bloat up because you're just blindly following the herd.
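The log complaint in concrete terms; a sketch where the container name (web) and the log path are made up:
docker logs web                                       # only helps if the app writes to stdout/stderr
docker exec web tail -n 50 /var/log/app/error.log     # otherwise you go digging inside the container
docker cp web:/var/log/app/error.log ./error.log      # or copy the file out just to read it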
>I know what you meant and I'm being a pedantic autist but it really helps when trying to understand patterns.
I understand Anon, I do this too. I love you.
meds
>Yeah, it would be nice if every developer used the exact same distribution and if dependencies never, ever conflicted under any circumstances, but that's just not reality.
It can be.
>technically, it ships the distro and exact OS configuration, not the machine.
All the machines are x86_64 anyway and the distros are all debian derivatives. Why not just use debian main to begin with?
because they want to overcomplicate things for no good reason.
programming languages literally solved this problem. C and C++ are portable, no need to reinvent the wheel over and over.
It's not about portability between systems but library versions. Good luck fixing your entire codebase because the protobuf devs decided to change the API 10 updates ago, but your LTS Ubuntu doesn't provide packages that old in apt, while all you want to do is add an option to display weight in stones instead of kg.
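Which is exactly what a container sidesteps: you pin the old version the code was written against and the host distro's apt never enters the picture. A rough sketch, where the base image and the protobuf version are arbitrary examples:
# run inside a pinned userland with the exact protobuf the code expects,
# regardless of what the host's package manager ships
docker run --rm -v "$PWD":/src -w /src python:3.8-slim \
  sh -c 'pip install "protobuf==3.20.3" && python -c "import google.protobuf as p; print(p.__version__)"'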
>hire server for 5 hours
>make it use a cuda python docker instead of wasting 1 hour installing the right driver etc.
>train
>profit
it takes like ten seconds to figure out the driver you lazy moron.
better yet if you supported open standards and used linux this wouldn't be a problem in the first place
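That greentext in practice really is one line once the box has the driver and the container toolkit; a sketch where the image tag and train.py are placeholders:
# rent the box, point docker at a prebuilt cuda+python image, mount your code, train
docker run --rm --gpus all -v "$PWD":/workspace -w /workspace \
  pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime python train.py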
You can argue docker is bloat but it isn't a hindrance, that's retarded. It makes everything easier at the cost of bloat.
bloat is hindrance
Give them a hand and they'll want an arm etc. Can't believe I conceded to a retard like you. You just don't understand why docker is useful because you don't have a job. Now kys.
i do, i just want NONE of it, thank you very much.
ok, I don't give a shit as I'm writing this on a modern browser, have fun in your bloat free life
enjoy your bloat, lazy dev.
>It makes everything easier at the cost of bloat
Yeah, everything except keeping the stack that runs your shitty app updated. That means outdated libraries with known bugs or lacking support for hardware features or, god forbid, custom patched garbage with no source.
Machine Learning is bloated
Containers are relevant when you need an environment that breaks the instant library X isn't exactly version 1.2.3 and shit like that.
Any decently written software doesn't need containers.
tensorflow is a bitch about which cuda to use, which cudnn to use, and some scripts require certain versions of tensorflow. It's a hassle to match all that stuff instead of just choosing "tensorflow x.x container" and installing it.
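That choice is basically one pinned tag; a sketch, where the exact tag is just an example of a TF/CUDA/cuDNN combo published under the tensorflow image:
docker run --rm --gpus all tensorflow/tensorflow:2.15.0-gpu \
  python -c "import tensorflow as tf; print(tf.__version__, tf.config.list_physical_devices('GPU'))"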
im pretty sure this is just a temporary symptom, hopefully when the AI/ML gold rush is over docker will go back to obscurity.
Docker is used everywhere, not just ML.
uh no?
i only see some (some) webshitters use it besides ML gays.
way to out yourself as a codelet, kubernetes and docker are universally used in almost all commercial projects
docker is worse than regular bloat.
you think flatpaks are bad? docker is nothing but pure evil.
it's a trojan horse.
Docker and other containers are really only useful for cloud services or data science.
one of the major selling points of programming languages was portability
you didn't need to worry about the architecture nor the OS
but you retards keep coming up with more "solutions" to problems that shouldn't even exist in the first place, all because you're rushing for the next big thing and won't settle for improving pre-existing technology. so keep on reinventing the wheel, fucktards.
I do embedded. Projects vary between ESP32 with different versions of IDF, nRF52 needing a modern GCC, and Chinese ARM knock-offs whose support was dropped 5 years ago. Recently I had to WFH on my own PC. "10 seconds." Installing the correct version of the special snowflake ARM toolchain and dependencies would take a few hours, but docker allowed me to recreate the environment by running one command.
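The "one command" is roughly this kind of thing; espressif does publish IDF images, but the tag and the project layout here are just examples:
# toolchain and deps come from the image; only docker has to exist on the host
docker run --rm -v "$PWD":/project -w /project espressif/idf:v5.1 idf.py build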
that's cool, good luck trying to solve that mess when you inevitably have to dig down into it on your own.
Everyone who isn't OpenAI is just kitbashing shit together as fast as possible, fishing for VC money.
Doesn't matter, everything is fragile as shit; if it works by magic, it's good enough. No need for it to work long term anyway.
yea, no long lasting project has a need for containers
The overcomplication of everything.
it's almost like no one working in demanding production environments wants to manually replicate, update, and monitor large stacks of complex tooling across tens, maybe even hundreds of servers, and foss project maintainers don't want to have to account for the millions of ways those servers could be set up
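i.e. you declare the stack once and replay it instead of hand-configuring every box; a minimal compose sketch, where the service names and the app image are placeholders:
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: myorg/webapp:1.4.2       # hypothetical app image
    ports: ["8080:8080"]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change_me  # don't leave defaults like this in prod
EOF
docker compose up -d                # the same declared stack comes up on any host with docker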