is Linux really limited to 32000 processes?

I've just bought a server with 1.5TB of RAM.  At 1-2MB per process, that's enough memory for around a million processes.  That might not make sense with only a few dozen cores, but running more than 2^15 (32768) processes, which as far as I can tell is the default limit in Linux, must be useful sometimes.
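
As far as I can tell, that ceiling is the kernel.pid_max sysctl, which is easy to check at runtime.  A minimal sketch, assuming a Linux system with /proc mounted:

    #include <stdio.h>

    int main(void)
    {
        /* kernel.pid_max is the exclusive upper bound on the pids the
           kernel will allocate; 32768 (2^15) is the usual default. */
        FILE *f = fopen("/proc/sys/kernel/pid_max", "r");
        long max;

        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        if (fscanf(f, "%ld", &max) == 1)
            printf("pid_max = %ld\n", max);
        fclose(f);
        return 0;
    }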

According to part of something calling itself "The Open Group Base Specifications Issue 7" [0] (I'm not sure, but this may be material common to both the C and POSIX standards):

  • blksize_t, pid_t, and ssize_t shall be signed integer types.

The relevant one here is pid_t.  The specification places no constraint on its width.  What would break if we had a system with a 32- or 64-bit signed pid_t?  (Is it common for client code to make wrong assumptions about this?)
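
For what it's worth, on typical glibc/Linux builds pid_t is already a 32-bit int, so the 2^15 ceiling comes from the kernel's pid allocator rather than from the type.  A minimal sketch to check the width, and to show the kind of assumption that would break (the truncating short variable below is a hypothetical example, not from any real codebase):

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        /* On typical glibc/Linux builds this prints 4 (a 32-bit int). */
        printf("sizeof(pid_t) = %zu bytes\n", sizeof(pid_t));

        /* The portable idiom: cast to long (or intmax_t) when printing,
           since the standards only promise a signed integer type. */
        printf("my pid = %ld\n", (long)getpid());

        /* What breaks with wider pids: code like this, which assumes a
           pid fits in 16 bits or in 5 decimal digits (old pid-file and
           fixed-width log formats), silently truncates. */
        short bad = (short)getpid();   /* hypothetical wrong assumption */
        printf("truncated pid = %hd\n", bad);

        return 0;
    }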

[0] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_types.h.html

What error do we currently get if the system runs out of pids?
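
For the record, fork(2) on Linux documents EAGAIN for this case: the call returns -1 once the kernel can't allocate a pid.  A minimal sketch to confirm it, but don't run this outside a throwaway VM or a cgroup with pids.max set low, since it is essentially a fork bomb:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        for (;;) {
            pid_t pid = fork();

            if (pid < 0) {
                /* Expect EAGAIN once the pid space (or RLIMIT_NPROC,
                   or the cgroup's pids.max) is exhausted. */
                fprintf(stderr, "fork: %s (errno=%d)\n",
                        strerror(errno), errno);
                return 1;
            }
            if (pid == 0)
                pause();   /* each child parks forever, holding its pid */
        }
    }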

My basic idea is that the multi-user nature of the existing Unix design (processes, UIDs, GIDs, filesystem areas, multiple addresses and ports and network interfaces, and so on) should be fully exploited before resorting to solutions like virtualisation, containers and so on.  Modularisation-by-process should also be able to assume that the process table space is limited only by genuine resource constraints, not by an arbitrary and low limit.
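
In fairness, Linux does let the administrator raise the ceiling at runtime, up to what I believe is a compile-time maximum of 2^22 (4194304) on 64-bit kernels.  A minimal sketch, equivalent to sysctl -w kernel.pid_max=4194304, assuming root and a mounted /proc:

    #include <stdio.h>

    int main(void)
    {
        /* Needs root.  Values above the kernel's compile-time
           PID_MAX_LIMIT (2^22 on 64-bit builds) are rejected. */
        FILE *f = fopen("/proc/sys/kernel/pid_max", "w");

        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        fprintf(f, "4194304\n");
        if (fclose(f) != 0) {
            perror("fclose");
            return 1;
        }
        return 0;
    }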
