Responses inline, I couldn't ignore this post...
I found that Ubuntu does not come with a lot of the core programming things that are needed. I followed the Lazarus/Free Pascal Ubuntu install instructions, and although I did each step and got no errors installing everything, when I went to run Lazarus it gave me an error.
Most OS distributions are user-centric; if you want to develop on a given platform, you have to add support for it.
Also, at least in Ubuntu, files are scattered all over the place; many are owned by root, and there are many operations you can't perform.
This is to protect you from yourself.
It's real easy to destroy a system otherwise.
My dream OS would be:
The OS is just an OS that you can get in three flavors:
1. Command line only: for servers and embedded devices.
2. Command line with ASCII GUI: for people running servers or businesses who want to keep employees from playing games.
3. A full OS: command line, ASCII GUI, and a regular modern GUI.
Most Unix-like OSes already provide 1 and 3, while 2 reminds me of Linux in the early 90's. As an aside, without a modern GUI most end-users would not be inclined to play games.
The system can't be touched by anyone. It can only be updated from official servers or from DVDs from the OS source. It would live in its own partition, with no access for anyone except official OS sources.
Sounds like you want Debian.
Drive C: would be the users' directory, with each user being a root folder in the C: root:
c:/kent
c:/john
c:/charles
That would end up being an administrative nightmare, I think.
When you log on as a user, your root directory is automatically your own directory. So if I log in as kent and type cd /,
it takes me into c:/kent.
As kent, I don't see any of the other users' folders.
When you log on, or start a terminal from a GUI session, you are already in your home folder.
If you're somewhere else in the file system, "cd" by itself takes you back to your home folder.
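A quick way to see this behavior from any shell:

```shell
cd /tmp     # wander somewhere else in the filesystem
cd          # "cd" with no arguments returns to your home folder
[ "$PWD" = "$HOME" ] && echo "back in home"
```

This is built into every POSIX shell; no per-user root remapping is needed to get the effect you describe.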
Only a root user can access any user's folder. So someone logging in as root would see this when they type cd /:
c:/kent
c:/john
c:/charles
Not sure what the advantage to this would be over doing "cd /Users" (if supported) or "cd /home". In your example, you would also see all of the folders/files that are at the root: /bin /sbin /home /usr /etc
Then there would be another partition holding shared data for all users; this would be drive D: for data.
Very easy to do if you partition your drives and edit /etc/fstab.
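For instance, a single line in /etc/fstab mounts a shared data partition for all users (the UUID, mount point, and filesystem type here are placeholders, not real values):

```
# /etc/fstab — mount a shared data partition at /data, visible to all users
UUID=xxxx-xxxx  /data  ext4  defaults,rw  0  2
```

Set the permissions on /data once (e.g. group-writable) and every user shares it, which is exactly your "drive D:".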
When I log in as kent, every change I make to the system settings and configuration is stored in my folder, which I have full access to.
In Linux, a user's home folder generally has a hidden folder called ".config"
In it, you can override most system-wide settings.
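A minimal sketch of that override convention (the app name "myapp" and its settings file are invented for illustration; a throwaway directory stands in for the home folder):

```shell
# Simulate a per-user override under ~/.config (the XDG convention).
XDG_CONFIG_HOME=$(mktemp -d)
mkdir -p "$XDG_CONFIG_HOME/myapp"
echo "theme=dark" > "$XDG_CONFIG_HOME/myapp/settings.conf"

# An app following the convention reads the user's file before falling
# back to the system-wide default in /etc:
USER_CONF="$XDG_CONFIG_HOME/myapp/settings.conf"
if [ -f "$USER_CONF" ]; then
    cat "$USER_CONF"
else
    echo "falling back to /etc/myapp/settings.conf"
fi
```

So per-user configuration already lives in the user's folder, much as you propose; only the program binaries stay shared.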
All programs would be self-contained in one folder per program, within a folder of the user's choice.
Basically, you allow the user to control where things are, to suit their tastes and the naming they prefer. Each downloaded and installed package would live in one folder instead of putting files all over the place.
This would lead to tremendous duplication on a multi-user system. For example, say you and I use the system. We both want to use OpenOffice. In your scenario, we each would have our own copy in our home folders. Come update time, if it's automated, the system would have to upgrade each copy. If it's not automated, each user is responsible, which now involves downloading the update twice. Pretty soon, we'd be fighting a drive-space issue.
Now the big thing. Each major project that provides a dynamic library (DLL) submits its latest library and headers to the OS maker, who in turn installs these into the OS via updates.
That's sort of what happens now, if you think about it. Keep in mind, though, that all sorts of regression testing has to take place, which is why all distros may not have the latest and greatest software/libraries available.
For a dynamic library a user creates, it would be the user's choice how to store the DLLs and headers.
Then you would have to contend with additional compiler and linker switches, so that your software can find the libraries and headers.
The last big thing is that the OS can talk to new hardware, downloading the interface, DLL, and header needed to make that device work from the hardware's firmware. So any OS designed with this interface to query and fetch from the firmware can support the hardware.
Hardware is proprietary; it falls to the hardware producer to create the drivers for a given OS. How would you even begin to manage that, since most Linux distributions aren't heavily funded?
Even Microsoft can't do this 100%; it's a logistical and monetary nightmare. Sure, you can go with reference drivers released by the manufacturer, but you would probably want to go with what the Vendor provides instead.
Take graphics cards, for example. A Vendor may have added an enhancement that the reference drivers don't support. So now, using ATI as an example, you have to provide drivers for just about every conceivable Vendor implementation.
All in all, your ideas are interesting, but (just my opinion) they aren't practical.
AIR.