Andromeda Computer - Blog
Thursday, 27 June 2019 11:25

Learn all about Unix and Linux


I get this question quite often, and I struggle to explain it, especially in a few simple words. Anyway, this is a very interesting topic, because things get complicated when it comes to UNIX vs Linux: there are business considerations, licenses, policies, government influence, and so on.

Because Unix is an operating system and Linux is a kernel, they are different in nature, have different purposes, and aren’t easily comparable. You can’t summarize the differences in a single sentence. But don’t worry: after this lesson in history, and in the features and purposes of both, you will get the “big picture” and everything will be nice and clear. If you want a quick read, you can jump straight to the conclusion at the end of the post.

 

Multix

Let’s jump to the late 1960s. Computers at the time were designed to do single, specific tasks. For example, there was a computer for calculating monthly salaries, or a computer for word processing in a library. Each of them ran a program specifically designed for that particular hardware and the task it was meant to do. A program written for one computer vendor (a manufacturer such as IBM) could not be executed on a computer developed by a different vendor. Moreover, those computers could not handle the execution of multiple programs at a time, only one. So if a user wanted to listen to some music while writing a document, that was impossible. To overcome those issues, the Multics operating system (also known as Multix) was developed, initially as a collaborative project between MIT, General Electric and Bell Labs. This is the root, the OS that laid the fundamentals of every new one, including Windows, macOS, Android, Linux-based operating systems and many more.

Multics (Multiplexed Information and Computing Service) is a time-sharing operating system. This means that many programs can share the hardware resources by switching between them at finite time intervals. In other words, the mechanism behind time-sharing operating systems works as follows:

  • One program uses the hardware (CPU, RAM, etc.) for a short time slice, say a few milliseconds, then it is stopped.
  • The hardware resources are then made available to another program for an equal amount of time.

Because the intervals are very small (the switching is very fast), there is an illusion that multiple programs are running concurrently. The very same principle is present in every modern operating system.
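To make the mechanism concrete, here is a minimal sketch of round-robin time slicing in C. It is purely illustrative (a user-space simulation with made-up numbers, not how Multics actually scheduled work): three hypothetical programs are each granted a fixed slice of “work” in turn until all of them finish.

```c
#include <stdio.h>

#define NUM_PROGRAMS 3
#define SLICE 20 /* units of work granted per turn */

int main(void) {
    /* Remaining work for three hypothetical programs. */
    int remaining[NUM_PROGRAMS] = {50, 30, 70};
    int done = 0;

    /* Round-robin: visit each unfinished program in turn and
       let it run for at most one time slice. */
    while (done < NUM_PROGRAMS) {
        for (int i = 0; i < NUM_PROGRAMS; i++) {
            if (remaining[i] <= 0)
                continue; /* already finished */

            int used = remaining[i] < SLICE ? remaining[i] : SLICE;
            remaining[i] -= used;
            printf("program %d ran for %d units, %d left\n",
                   i, used, remaining[i]);

            if (remaining[i] == 0)
                done++;
        }
    }
    return 0;
}
```

Interleaving the programs in small slices like this is exactly what creates the illusion of concurrency described above.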

In addition to its time-sharing capabilities, Multics was designed around the idea of a modular hardware structure and software architecture. It consists of many small “building blocks”. Each block can be independently swapped for another one that performs the same function, perhaps in a different way. The final result is a system that can grow over time by reusing the blocks instead of reimplementing them. So when there is a hardware change, only a few blocks are updated and the rest are reused. If the very same feature is required by multiple programs, they can share a common implementation. For example, many programs can share the same implementation for transforming a word to lowercase, thus saving time, effort, frustration among developers, and money. Those are the fundamentals of Multics.
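As a present-day illustration of such a shared building block, here is a small sketch in C: one lowercasing routine (built on the standard library’s tolower) that any number of programs could reuse instead of each writing its own.

```c
#include <ctype.h>
#include <stdio.h>

/* A shared "building block": lowercase a string in place.
   Every program that links against this routine reuses a
   single implementation instead of reinventing it. */
void to_lowercase(char *s) {
    for (; *s != '\0'; s++)
        *s = (char)tolower((unsigned char)*s);
}

int main(void) {
    char word[] = "MULTICS";
    to_lowercase(word);
    printf("%s\n", word); /* prints "multics" */
    return 0;
}
```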

Despite all the goodness of Multics, Dennis Ritchie and Ken Thompson (at the time employed at Bell Labs) were not satisfied with every aspect of the project, mostly with the size and complexity introduced to achieve its goals. In their spare time they started working on a similar hobby project (effectively a reimplementation of Multics) named Unics (Uniplexed Information and Computing Service), also known as Unix. As you can see, the name Unics is influenced by Multics, the only difference being the swap of the word “multiplexed” for “uniplexed”. The reason for this swap is a technical limitation of the Unics project at the very beginning: it could not handle the execution of multiple programs simultaneously, only one single program at a time, hence “uniplexed”. It is worth mentioning that the Unics project was intended only for internal use inside Bell Labs and was developed without any organizational backing.

Since we reached the birth of Unics, it’s time for a small recap:

  1. Multics development started in the late 1960s.
  2. Multics’ goals, such as time-sharing (multitasking), are still valuable today.
  3. There were complaints about its size and complexity.
  4. In the early 1970s, Unics development began on a smaller scale to overcome the disadvantages of Multics, as a hobby project of Dennis Ritchie and Ken Thompson.

Let’s continue with more details about Unics and its development.

Unix

Unics was initially written in assembly language. Because of this, most of the code was hardcoded for specific hardware and not easily portable to other computers. No better alternative was available at the time.

Meanwhile, the C programming language was released (created at Bell Labs by Dennis Ritchie). The intention of this programming language was to enable writing portable programs. This is achieved by requiring only a relatively simple compiler, mapping efficiently to machine instructions, requiring minimal run-time support, and so on. Even for non-technical people, this was truly amazing at the time.
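Portability here means that the very same source code can be compiled for very different machines. A trivial sketch: the program below uses only the standard C library, so any conforming C compiler can build it for its own hardware; the compiler, not the programmer, absorbs the hardware differences.

```c
#include <stdio.h>

/* Uses only the standard C library, so the same source
   compiles unmodified on any platform with a C compiler. */
int main(void) {
    printf("The same source builds everywhere.\n");
    return 0;
}
```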

At this moment in time there is Unics, which is not portable, and a new programming language that offers portability. Rewriting Unics in C sounds like a wise idea, and in the mid-1970s that is exactly what happened, introducing portability. However, a legal issue prevented public release and wide use.

 

From a business and legal perspective, things are quite interesting. There is a giant telecommunications company named AT&T that owns the previously mentioned Bell Labs research center. Due to the nature of the business and the state of technology at that time, AT&T was considered a government-controlled monopoly. To simplify: the prices of telecommunication services were controlled by the government so they could not skyrocket, but AT&T also could not go bankrupt, thanks to income guaranteed by the government. The point is that Bell Labs had a stable source of income (funded by AT&T) and could afford to allocate resources to whatever task it wanted, with little to no worry about cost. Almost complete freedom, which is quite good for a research center.

Because of the monopoly issues and other legal constraints, AT&T was forbidden from entering the computer market; only telecommunication services were allowed. All it could do was license the source code of Unics. It is worth mentioning that the source code was distributed to other research centers and universities for further research, development, and collaboration, but under the corresponding license terms.

Later, there was a separation between Bell Labs and AT&T. Since the government-controlled monopoly applied to AT&T, Bell Labs was free after the separation, so no legal issues were present anymore.

System V and BSD

In the early 1980s, AT&T released a commercial version of Unics named System V (read as “System Five”).

In the meantime, while AT&T was working on System V, at the University of California, Berkeley, development of the previously shared code from Bell Labs continued, and a very similar Unics operating system was developed and released as BSD (Berkeley Software Distribution).

It’s time for a recap:

  • Initial development of Unics is done at Bell Labs.
  • The Unics source code is shared among universities and other research centers.
  • Bell Labs and AT&T separate.
  • AT&T continues the development of its own version of Unics, named System V.
  • At the University of California, Berkeley, development of the previously shared source code continues, and another operating system is released as BSD (Berkeley Software Distribution).

  • So by the mid-80s we already have two different Unics lines (System V and BSD), each evolving on its own but sharing a common predecessor.
  • There is no such thing as “the real” or “the genuine” Unics operating system. As time passes, there are even more variants of what was available in those two branches.
  • HP branches out, developing an operating system named HP-UX (Hewlett-Packard Unix). Sun branches out with an operating system named Solaris. IBM branches out and continues developing its version, named AIX.

It is worth mentioning that all of these branch-outs were done to provide unique features, so that a given vendor could offer a better product on the market. For example, the networking stack was first available on the BSD branch, but was later cross-ported to all the other branches. Almost every nice feature was cross-ported to the other branches at some point. To overcome the problems of cross-porting features, and to improve reusability at a higher level, POSIX (Portable Operating System Interface) was introduced by the IEEE Computer Society in 1988. It is a standard which, if followed by the vendors, guarantees compatibility between operating systems, so that programs are executable on other operating systems with no modifications required.
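As a small illustration of what such a standard buys you, here is a sketch that uses only POSIX interfaces (open, read, write, and close). Because these calls are specified by the standard, the very same source compiles and runs on Linux, the BSDs, Solaris, AIX, and any other POSIX-compliant system. The file names are just placeholders.

```c
#include <fcntl.h>   /* open */
#include <unistd.h>  /* read, write, close */

/* Copy a file using only POSIX system calls; the same
   code builds unmodified on any POSIX-compliant OS. */
int main(void) {
    char buf[4096];
    ssize_t n;

    int in = open("input.txt", O_RDONLY);
    int out = open("copy.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0)
        return 1; /* could not open one of the files */

    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, (size_t)n);

    close(in);
    close(out);
    return 0;
}
```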

Although reusability was present to some degree, the addition of new features required a lot of work, which made development slower and harder. This was due to the terms and conditions of the AT&T license under which the Unics source code was distributed. To eliminate all the legal issues around sharing the source code, people working on the BSD branch started replacing the original source files inherited from AT&T with their own implementations, released under the BSD license, which is more liberal in terms of reuse, modification, distribution, etc. The idea was to release the Unics operating system without any restrictions. Today, this is known as free software: free as in the freedom to study, modify and distribute the modified version without any legal action against the developer.

This idea was not welcomed by AT&T, so there was a lawsuit. It turned out that there was no violation of any kind. The trend of replacing files continued, and BSD version 4.4 (also known as BSD-Lite) was released free of any source code originating from AT&T.

One more recap: 

  • Many branch-outs.
  • The POSIX standard is introduced.
  • It turns out that many features get cross-ported sooner or later.
  • It is hard to say what the “root” or “genuine” Unics operating system is anymore. Everything branches from the same predecessor, and every feature gets cross-ported, so everything is more or less a variation of the same OS.
  • Due to the legal issues that came with the AT&T license, development was hard and redundant work was common.
  • BSD starts removing all the files originating from AT&T, providing source files that are free for modification and redistribution.

Now it is time to mention the GNU project.

GNU


 

GNU (GNU’s Not Unix) is a free-software, mass-collaboration project announced in 1983. Its aim is to give users freedom and control in the use of their computers and electronic devices.

Can you spot the similarity with what the people behind BSD were already doing?

Both are related to the term free software, but with a very big difference in how free software should be treated, which becomes obvious when comparing the GPL license (released by GNU) with the BSD license. Basically, it comes down to this:

  • The BSD license is less restrictive. It says: do whatever you want with the source code. No restrictions of any kind.
  • The GPL license is more restrictive, but in a good way. It puts emphasis on preventing the use of open source code (GPL-licensed) in proprietary, closed-source applications. It states that if any GPL-licensed source code is used, your source code must be released under the same license too. Basically, with the GPL license you can take whatever you want, but you must give back whatever you produce, thus increasing the amount of available free software.
  • By comparison, the BSD license does not state that whatever is produced must be released as free software too. It can be released as proprietary, closed-source software without sharing any of the source code.

In addition to the license, the GNU project develops a lot of the software that is required in order to have a fully functional operating system. Some of its tools are the GNU C Library, the GNU Compiler Collection (GCC) and the GNOME desktop environment, all of which are currently used in popular Linux distros.

Having all this in mind let’s talk about Linux by briefly explaining what it is.

Linux


Linux is not an operating system like BSD. Linux is a kernel.

But what is the difference between a kernel and an operating system?

  • An operating system is a collection of many things working as a whole: a fully functional, complete product.
  • A kernel is only a piece of the whole operating system.
  • The Linux kernel, it can be said, is little more than a bunch of drivers. There is a bit more to it, but for our purposes we will ignore the rest.

Now, what are drivers? A driver is a program that handles the utilization of a specific piece of hardware.

Short recap:

  • A driver is a program that handles the utilization of a specific piece of hardware.
  • Linux is just a bunch of drivers (and something more that we will ignore for now); a minimal sketch of how such a piece looks follows after this recap.
  • Linux is a kernel.
  • A kernel is a piece of an operating system.
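To make this slightly more tangible, here is a minimal sketch of a Linux kernel module, which is the form in which drivers are packaged for the Linux kernel. It is a hypothetical hello-world module, not a real hardware driver, and unlike a normal program it must be built against the kernel headers and loaded into a running kernel (e.g. with insmod).

```c
#include <linux/init.h>
#include <linux/module.h>

/* A minimal kernel module: real drivers follow this same
   load/unload skeleton, plus the code that actually talks
   to a piece of hardware. */
static int __init hello_init(void)
{
    printk(KERN_INFO "hello: module loaded\n");
    return 0;
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
```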

I assume we are all clear by now so we can begin with the Linux history lesson.

Its origins are in Finland in the early 1990s, about 20 years after Unics. Linus Torvalds, a student at the time, was influenced by the GNU project and its GPL license, and by Minix (a Unics-like operating system for education). He liked many things about Unics operating systems, but also disliked some of them. As a result, he started working on his own operating system, utilizing many of the GNU tools already available. The end result is that Linus developed only a kernel (the drivers). Linux-based operating systems are sometimes referred to as GNU/Linux operating systems because, without the GNU tools, the Linux kernel is of little use in real life.

It can be said that Linux, to some extent, is a reimplementation of what was available as the Unics operating system (BSD, System V…), but with a license that puts more emphasis on keeping the software free, by enforcing that modifications be contributed back and thus remain available for studying, modification, and further distribution.

The time-sharing capabilities that allow multitasking, the C programming language that provides portability, the modular software design that allows swapping a single piece when needed while reusing the rest: all of this is inherited from Unics. Those are the fundamentals mentioned at the very beginning of this post. But no source code is shared with Unix.

It is worth mentioning that Linux was intended to be a small school project. Many computer scientists were interested in trying it out of curiosity.

While Linux was still young, the lawsuit between BSD and AT&T was ongoing. Due to the uncertainty hanging over BSD’s future, many companies that used BSD moved to Linux as a very similar alternative on more stable footing. Linux also had one single source tree, while the BSD source was spread across many independent branches (BSD, Solaris, HP-UX, AIX, etc.).

From the perspective of a company requiring an operating system for its product (Wi-Fi routers, cable TV boxes, etc.), Linux was the better choice. Having a single branch guarantees that every merged feature is available to everyone right away, and it is simpler to maintain. On the BSD side, due to the independent development, new features still required some sort of cross-porting, which sometimes broke something else.

This is the historical reason why Linux gained great popularity even in the early stages of its development, while still not on par with BSD and lacking many features.

Unics vs Unix, Multics vs Multix

Did you notice that sometimes the term Unics is used instead of Unix?

The fun fact is that the original name of the project was Unics, but somehow people started calling it Unix. There are many stories about how Unix became the popular name, but no one can tell for sure. Today, Unix is accepted as the official name of the project.

The very same thing happened with Multics: over time, everyone called it Multix, even though that was never its official name.

Conclusion – Unix vs Linux


A timeline of Unix-like OSes

At this moment we know the history and the evolution of these operating systems, why all the branch-outs occurred, and how government policy can influence things. The whole story can be summarized as follows:

  • Unix was an operating system developed at Bell Labs in the 1960s and 1970s. With all the branching mentioned above, and the cross-porting of features between branches, the situation is simply chaotic, and it is hard to say what the genuine Unix is anymore.
  • It can be said that the most genuine Unix operating systems are System V and BSD.
  • System V was developed by AT&T as a continuation of the work done at Bell Labs, after their separation.
  • The most popular direct descendant of Unix is the BSD project. It took the source code developed at Bell Labs, replaced everything that was released under the restrictive license, and continued as a free distribution.
  • Other popular distributions today are FreeBSD, OpenBSD and NetBSD, but many more are available.
  • Linux, on the other hand, does not share any code with Unix (from Bell Labs); it just follows the same principle of utilizing small building blocks to produce something of bigger value. This is mostly known as writing a small program that does one thing and does it well. Such programs are then combined through mechanisms known as “pipes” and “redirection”, so the output of one program becomes the input of another, and as the data flows, something of bigger value is achieved as the final result (see the sketch after this list).
  • In terms of licenses, Unics had a very restrictive license during its development; later it was forked under a free license (BSD). Linux, on the other hand, has used the GPL license from the very beginning.
  • Both follow the POSIX standard, so program compatibility is guaranteed.
  • Both can be driven through the same shells for interfacing with the system; Bash is the default on most Linux distributions and is available on the BSDs too.
  • BSD is distributed as a whole system.
  • Linux-based operating systems are made with the Linux kernel in combination with GNU software and many other smaller utilities that complement each other to accomplish the goal.
  • Popular Linux distributions: Ubuntu, Mint, CentOS, Debian, Fedora, Red Hat, Arch Linux, and many more. There are hundreds of distros nowadays, some of them optimized for a specific purpose, like gaming or old computers.
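Here is the pipes sketch referenced above, written against the POSIX C API. It wires the output of ls into the input of wc -l, the equivalent of the shell pipeline ls | wc -l, so two small single-purpose programs combine to answer a bigger question: how many entries are in the current directory?

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Equivalent of the shell pipeline: ls | wc -l */
int main(void) {
    int fd[2];
    if (pipe(fd) < 0) {
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {              /* first child runs ls */
        dup2(fd[1], STDOUT_FILENO); /* stdout -> write end of pipe */
        close(fd[0]);
        close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        _exit(1);                   /* reached only if exec fails */
    }

    if (fork() == 0) {              /* second child runs wc -l */
        dup2(fd[0], STDIN_FILENO);  /* stdin <- read end of pipe */
        close(fd[0]);
        close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        _exit(1);
    }

    close(fd[0]);                   /* parent keeps no pipe ends */
    close(fd[1]);
    while (wait(NULL) > 0)          /* wait for both children */
        ;
    return 0;
}
```

In a shell you would simply type ls | wc -l; redirection (> and <) works the same way, but connects a program to a file instead of to another program.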

Even though we stated that there is one single source, the Linux kernel, there are many Linux distributions (Linux-based operating systems). This may be confusing for some, so I will explain, just in case:

Every Linux distribution (distro) ships different versions of the Linux kernel or of the tools, or simply uses different building blocks. For example, Ubuntu uses systemd as its init system, while Slackware uses a SysV-style init as the equivalent. There is nothing wrong with either; they do the same thing with some differences, and there are use cases where one is better than the other.

Another example: some users prefer to always have the latest version of their software, so they use rolling-release Linux-based operating systems like Arch Linux. Others may prefer a stable environment with no major changes for five or more years; an Ubuntu LTS (Long Term Support) release is ideal for this use case, which is why it is widely used on servers, along with CentOS.

As you can see, there are even more similarities between the two. Linux-based operating systems are in the same “chaotic” situation too: there is no such thing as the real or the genuine Linux-based operating system. There are many of them, but at least they share the same Linux kernel source.

It is worth mentioning that programs written for Linux-based operating systems, and shell commands that follow the POSIX standard, can be executed on Unix-based operating systems too. Thus, major software like Firefox or the GNOME desktop environment is available everywhere without requiring any modifications.

Another fun fact not mentioned before is that even macOS (used on Apple computers) is considered a BSD derivative. Not every release, but some of them are.

As you can see, in reality, things are even more complicated and interesting.


Published in GNU/Linux Rules!

Red Hat is, by its very nature, a deviation from the norm in this series of profiles. It is not a company with an open source program, but rather an open source company with an open source and standards office and an engineering team dedicated to curating communities and tending upstream contributions. In essence, Red Hat is a living, breathing testament to the success of open source. However, it still benefited from some organization and goal-setting in its community efforts.

“The Open Source and Standards office, or what some would refer to as an open source program office, was established six years ago to create a consistent way to support communities in which Red Hat is actively participating. We created a centralized organization of expertise and resources to support our goals by flanking the considerable upstream engineering efforts,” explained Deborah Bryant, senior director, Open Source and Standards, in the office of the CTO at Red Hat.

However, there wasn’t any need to advocate open source or push for its adoption internally. Red Hat started from day one as an open source company rather than approaching open source later, so everyone on board was already firmly in the open source camp.

“Most open source program offices are chartered to encourage and enable engineers to contribute to open source, or to educate people on what open source is, or to assist in choosing an open source license. These are things that are a done deal at Red Hat,” says Bryant.

“Rather than just seeing how we can use open source to improve our business, or be more flexible in operational efficiencies, or bringing more money to the bottom line, we are at the level of maturity where open source is our actual business practice and model. And because we work first upstream (in the open source project) of our products first, community success is critical.”

Therefore, the focus is supporting open source projects and the ecosystem rather than on transitioning to open source.

“For us, open source is an important part of our business model, and our business goals are to make sure that those communities that we rely upon are healthy and thriving,” said Bryant.

In Red Hat’s open source toolbox

Having goals is one thing; achieving them is quite another. Several tools can be used to measure progress and results, and Red Hat uses a range of them. Communications-based tools top the Red Hat list of must-haves.

“Collaboration tools are a very big deal for us, because we have a high degree of collaboration across engineering and product and business lines. I know I’m probably understating that, but collaboration across Red Hat is huge,” Bryant said.

The company also uses the kinds of open source project, program and community tools you would expect, as well as Kanban boards for organizing tasks.

“A lot of these are developed organically, independently through the communities that we support – they pick the tools that work for them. We use Kanban boards to track progress. We measure using metrics that are established community by community and also in terms of what Red Hat’s hoping to influence through contribution. We use both publicly published metrics and internal metrics for custom boards,” says Bryant.

The team also started using OKRs, or Objectives and Key Results. The framework is used to define and track business objectives and outcomes. Red Hat plans to use OKRs across projects to connect the business side of Red Hat with the work of product managers and engineering to better support long term objectives.

Bryant says that “probably the most essential communications tool we use is IRC.” The acronym stands for Internet Relay Chat and it’s a system used for real-time communications between people anywhere on the planet.

“Most of us are working virtually over five or six or different time zones. IRC is our virtual building, our team is there and collaborating on a conventional level,” she said. “We use a tool called Telegram to do logistics and coordination when we are traveling at big events.”

Measuring Success

At Red Hat, success is defined differently for each open source project.

“When you talk about measuring upstream contributions and such, we actually go through a formal process on an annual basis, and then we refresh it several times a year to define what the success criteria are with the folks here at Red Hat who have the biggest stake in the project,” says Bryant.

“But in other cases, such as Fedora, where we have a lot of Red Hat contributors, we’ve started to measure the number of upstream contributions from other organizations, and not just from our own. For us, healthy ecosystems are a key goal, so we measure our successes partly by measuring how many other contributors there are.”

Dave Neary, a senior principal software engineer working on SDN and NFV in the Open Source and Standards office, added another example in OpenDaylight.

“There is already an ecosystem of companies that contribute to OpenDaylight, and there is a developer team inside Red Hat. Our goal could be to increase the adoption of OpenDaylight as an SDN backend for OpenStack, for example. Or, it could be to increase the awareness of OpenDaylight as an end-to-end network management solution. That is a very different goal, with different stakeholders, and you would measure different things,” he said.

“The goals are going to be different from one project to another. One project may care much more about developing the user community, while another project may care much more about growing a vendor ecosystem.”

Acknowledgements

We would like to thank Dave Neary (senior principal software engineer working on SDN and NFV in the Open Source and Standards office and CTO’s office) and Deb Bryant (senior director, Open Source and Standards, in the office of the CTO at Red Hat) for contributing content to this article, along with Pam Baker who performed the interviews.

Source: linuxfoundation.org

Marielle Price

 

Published in GNU/Linux Rules!