Andromeda Computer - Blog
Tuesday, 15 October 2019 16:36

Benefits of centralizing GNOME on GitLab

The GNOME project's decision to centralize on GitLab is creating benefits across the community—even beyond the developers.

 gnomeandro.png

 

“What’s your GitLab?” is one of the first questions I was asked on my first day working for the GNOME Foundation—the nonprofit that supports GNOME projects, including the desktop environment, GTK, and GStreamer. The person was referring to my username on GNOME’s GitLab instance. In my time with GNOME, I’ve been asked for my GitLab a lot.

We use GitLab for basically everything. In a typical day, I handle several issues, reference bug reports, and occasionally need to modify a file. I don't do this in my capacity as a developer or a sysadmin. I'm involved with the Engagement and Inclusion & Diversity (I&D) teams. I write newsletters for Friends of GNOME and interview contributors to the project. I work on sponsorships for GNOME events. I don't write code, and I use GitLab every day.

 

The GNOME project has been managed a lot of ways over the past two decades. Different parts of the project used different systems to track changes to code, collaborate, and share information both as a project and as a social space. However, the project made the decision that it needed to become more integrated and it took about a year from conception to completion. There were a number of reasons GNOME wanted to switch to a single tool for use across the community. External projects touch GNOME, and providing them an easier way to interact with resources was important for the project, both to support the community and to grow the ecosystem. We also wanted to better track metrics for GNOME—the number of contributors, the type and number of contributions, and the developmental progress of different parts of the project.

When it came time to pick a collaboration tool, we considered what we needed. One of the most important requirements was that it must be hosted by the GNOME community; being hosted by a third party didn’t feel like an option, so that discounted services like GitHub and Atlassian. And, of course, it had to be free software. It quickly became obvious that the only real contender was GitLab. We wanted to make sure contribution would be easy. GitLab has features like single sign-on, which allows people to use GitHub, Google, GitLab.com, and GNOME accounts.

We agreed that GitLab was the way to go, and we began to migrate from many tools to a single tool. GNOME board member Carlos Soriano led the charge. With lots of support from GitLab and the GNOME community, we completed the process in May 2018.

There was a lot of hope that moving to GitLab would help grow the community and make contributing easier. Because GNOME previously used so many different tools, including Bugzilla and CGit, it’s hard to quantitatively measure how the switch has impacted the number of contributions. We can more clearly track some statistics though, such as the nearly 10,000 issues closed and 7,085 merge requests merged between June and November 2018. People feel that the community has grown and become more welcoming and that contribution is, in fact, easier.

People come to free software from all sorts of different starting points, and it’s important to try to even out the playing field by providing better resources and extra support for people who need them. Git, as a tool, is widely used, and more people are coming to participate in free software with those skills ready to go. Self-hosting GitLab provides the perfect opportunity to combine the familiarity of Git with the feature-rich, user-friendly environment provided by GitLab.

It’s been a little over a year, and the change is really noticeable. Continuous integration (CI) has been a huge benefit for development, and it has been completely integrated into nearly every part of GNOME. Teams that aren’t doing code development have also switched to using the GitLab ecosystem for their work. Whether it’s using issue tracking to manage assigned tasks or version control to share and manage assets, even teams like Engagement and I&D have taken up using GitLab.

It can be hard for a community, even one developing free software, to adapt to a new technology or tool. It is especially hard in a case like GNOME, a project that recently turned 22. After more than two decades of building a project like GNOME, with so many parts used by so many people and organizations, the migration was an endeavor that was only possible thanks to the hard work of the GNOME community and generous assistance from GitLab.

I find a lot of convenience in working for a project that uses Git for version control. It’s a system that feels comfortable and is familiar—it’s a tool that is consistent across workplaces and hobby projects. As a new member of the GNOME community, it was great to be able to jump in and just use GitLab. As a community builder, it’s inspiring to see the results: more associated projects coming on board and entering the ecosystem; new contributors and community members making their first contributions to the project; and increased ability to measure the work we’re doing to know it’s effective and successful.

It’s great that so many teams doing completely different things (such as what they’re working on and what skills they’re using) agree to centralize on any tool—especially one that is considered a standard across open source. As a contributor to GNOME, I really appreciate that we’re using GitLab.


Published in GNU/Linux Rules!

DevSecOps evolves DevOps to ensure security remains an essential part of the process.

devsecopsandro.jpg 

DevOps is well-understood in the IT world by now, but it's not flawless. Imagine you have implemented all of the DevOps engineering practices in modern application delivery for a project. You've reached the end of the development pipeline—but a penetration testing team (internal or external) has detected a security flaw and come up with a report. Now you have to re-initiate all of your processes and ask developers to fix the flaw.

This is not terribly tedious in a DevOps-based software development lifecycle (SDLC) system—but it does consume time and affects the delivery schedule. If security were integrated from the start of the SDLC, you might have tracked down the glitch and eliminated it on the go. But pushing security to the end of the development pipeline, as in the above scenario, leads to a longer development lifecycle.

This is the reason for introducing DevSecOps, which consolidates the overall software delivery cycle in an automated way.

In modern DevOps methodologies, where containers are widely used by organizations to host applications, we see greater use of Kubernetes and Istio. However, these tools have their own vulnerabilities. For example, the Cloud Native Computing Foundation (CNCF) recently completed a Kubernetes security audit that identified several issues. All tools used in the DevOps pipeline need to undergo security checks while running in the pipeline, and DevSecOps pushes admins to monitor the tools' repositories for upgrades and patches.

 

What Is DevSecOps?

Like DevOps, DevSecOps is a mindset or a culture that developers and IT operations teams follow while developing and deploying software applications. It integrates active and automated security audits and penetration testing into agile application development.

 

To utilize DevSecOps, you need to:

  • Introduce the concept of security right from the start of the SDLC to minimize vulnerabilities in software code.
  • Ensure everyone (including developers and IT operations teams) shares responsibility for following security practices in their tasks.
  • Integrate security controls, tools, and processes at the start of the DevOps workflow. These will enable automated security checks at each stage of software delivery.

DevOps has always been about including security—as well as quality assurance (QA), database administration, and everyone else—in the dev and release process. However, DevSecOps is an evolution of that process to ensure security is never forgotten as an essential part of the process.

 

Understanding the DevSecOps pipeline

There are different stages in a typical DevOps pipeline; a typical SDLC process includes phases like Plan, Code, Build, Test, Release, and Deploy. In DevSecOps, specific security checks are applied in each phase.

 

Plan: Execute security analysis and create a test plan to determine scenarios for where, how, and when testing will be done.

Code: Deploy linting tools and Git controls to secure passwords and API keys.

Build: While building code for execution, incorporate static application security testing (SAST) tools to track down flaws in code before deploying to production. These tools are specific to programming languages.

Test: Use dynamic application security testing (DAST) tools to test your application while in runtime. These tools can detect errors associated with user authentication, authorization, SQL injection, and API-related endpoints.

Release: Just before releasing the application, employ security analysis tools to perform thorough penetration testing and vulnerability scanning.

Deploy: After completing the above tests in runtime, send a secure build to production for final deployment.
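As a small, concrete illustration of the "Code" stage above, here is a minimal sketch that uses only plain git and grep (not any particular security product) to scan staged changes for strings that look like hard-coded credentials:

# Flag staged lines that look like credentials before they reach the repository
git diff --cached | grep -nE '(password|passwd|secret|api[_-]?key|token)[[:space:]]*[:=]'

If the command prints any matches, the commit or pipeline job can be failed until the lines are reviewed; a real DevSecOps pipeline would replace this with dedicated secret-scanning and SAST tools.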

 

DevSecOps tools

Tools are available for every phase of the SDLC. Some are commercial products, but most are open source. In my next article, I will talk more about the tools to use in different stages of the pipeline.

DevSecOps will play an increasingly crucial role as enterprise security threats to modern IT infrastructure grow in complexity. However, the DevSecOps pipeline should improve gradually over time, rather than relying on implementing all security changes at once; this reduces the chances of backtracking or of failed application delivery.

 


Published in GNU/Linux Rules!

che cloud.png

 

Eclipse Che offers Java developers an Eclipse IDE in a container-based cloud environment.

 

 

In the many, many technical interviews I've gone through in my professional career, I've noticed that I'm rarely asked questions that have definitive answers. Most of the time, I'm asked open-ended questions that do not have an absolutely correct answer but evaluate my prior experiences and how well I can explain things.

One interesting open-ended question that I've been asked several times is:

 

"As you start your first day on a project, what five tools do you install first and why?"

 

There is no single definitely correct answer to this question. But as a programmer who codes, I know the must-have tools that I cannot live without. And as a Java developer, I always include an integrated development environment (IDE)—and my two favorites are Eclipse IDE and IntelliJ IDEA.

 

 

My Java story

When I was a student at the University of Texas at Austin, most of my computer science courses were taught in Java. And as an enterprise developer working for different companies, I have mostly worked with Java to build various enterprise-level applications. So, I know Java, and most of the time I've developed with Eclipse. I have also used the Spring Tools Suite (STS), which is a variation of the Eclipse IDE installed with Spring Framework plugins, and IntelliJ IDEA, which in my case is not exactly open source, since I prefer its paid edition; some Java developers favor it due to its faster performance and other fancy features.

Regardless of which IDE you use, installing your own developer IDE presents one common, big problem: "It works on my computer, and I don't know why it doesn't work on your computer."

cohhdhdlin.jpg

 

Because a developer tool like Eclipse can be highly dependent on the runtime environment, library configuration, and operating system, the task of creating a unified sharing environment for everyone can be quite a challenge.

 

But there is a perfect solution to this. We are living in the age of cloud computing, and Eclipse Che provides an open source solution to running an Eclipse-based IDE in a container-based cloud environment.

 

From local development to a cloud environment

I want the benefits of a cloud-based development environment with the familiarity of my local system. That's a difficult balance to find.

When I first heard about Eclipse Che, it looked like the cloud-based development environment I'd been looking for, but I got busy with technology I needed to learn and didn't follow up with it. Then a new project came up that required a remote environment, and I had the perfect excuse to use Che. Although I couldn't fully switch to the cloud-based IDE for my daily work, I saw it as a chance to get more familiar with it.


 

 

Eclipse Che IDE has a lot of excellent features, but what I like most is that it is an open source framework that offers exactly what I want to achieve:

  • Scalable workspaces leveraging the power of cloud
  • Extensible and customizable plugins for different runtimes
  • A seamless onboarding experience to enable smooth collaboration between members

 

Getting started with Eclipse Che

Eclipse Che can be installed on any container-based environment. I run both CodeReady Workspaces 1.2 and Eclipse Che 7 on OpenShift, but I've also tried it on top of Minikube and Minishift.

opeenenen.jpg

 

You can also run Che on any container-based environment like OKD, Kubernetes, or Docker. Read the requirements guide to ensure your runtime is compatible with Che.

 

 

For instance, you can quickly install Eclipse Che if you launch OKD locally through Minishift, but make sure to have at least 5GB RAM to have a smooth experience.

There are various ways to install Eclipse Che; I recommend leveraging the Che command-line interface, chectl. Although it is still in an incubator stage, it is my preferred way because it gives multiple configuration and management options. You can also run the installation as an Operator. I decided to go with chectl since I did not want to take on both concepts at the same time. Che's quick-start provides installation steps for many scenarios.
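For example, on a local Minishift cluster, the quick-start boils down to a single command along these lines (a sketch only; flag names vary between chectl releases, so check chectl --help for your version):

chectl server:start --platform=minishift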

 

Why cloud works best for me

 

Although the local installation of Eclipse Che works, I found the most painless way is to install it on one of the common public cloud vendors.

I like to collaborate with others in my IDE; working collaboratively is essential if you want your application to be something more than a hobby project. And when you are working at a company, there will be enterprise considerations around the application lifecycle of develop, test, and deploy for your application.

Eclipse Che's multi-user capability means each person owns an isolated workspace that does not interfere with others' workspaces, yet team members can still collaborate on application development by working in the same cluster. And if you are considering moving to Eclipse Che for something more than a hobby or testing, the cloud environment's multi-user features will enable a faster development cycle. This includes resource management to ensure resources are allocated to each environment, as well as security considerations like authentication and authorization (or specific needs like OpenID) that are important to maintaining the environment.

Therefore, moving Eclipse Che to the cloud early will be a good choice if your development experience is like mine. By moving to the cloud, you can take advantage of cloud-based scalability and resource flexibility while on the road.

 

Use Che and give back

I really enjoy this new development configuration that enables me to regularly code in the cloud. Open source enables me to do so in an easy way, so it's important for me to consider how to give back. All of Che's components are open source under the Eclipse Public License 2.0 and available on GitHub.

Consider using Che and giving back—either as a user by filing bug reports or as a developer to help enhance the project.

 

 


 

 

 

 

 

Published in GNU/Linux Rules!
Monday, 30 September 2019 14:02

GNU Debugger: Practical tips


Learn how to use some of the lesser-known features of gdb to inspect and fix your code.

  

 

The GNU Debugger (gdb) is an invaluable tool for inspecting running processes and fixing problems while you're developing programs.

You can set breakpoints at specific locations (by function name, line number, and so on), enable and disable those breakpoints, display and alter variable values, and do all the standard things you would expect any debugger to do. But it has many other features you might not have experimented with. Here are five for you to try.

Conditional breakpoints

Setting a breakpoint is one of the first things you'll learn to do with the GNU Debugger. The program stops when it reaches a breakpoint, and you can run gdb commands to inspect it or change variables before allowing the program to continue.

For example, you might know that an often-called function crashes sometimes, but only when it gets a certain parameter value. You could set a breakpoint at the start of that function and run the program. The function parameters are shown each time it hits the breakpoint, and if the parameter value that triggers the crash is not supplied, you can continue until the function is called again. When the troublesome parameter triggers a crash, you can step through the code to see what's wrong.

 

(gdb) break sometimes_crashes

Breakpoint 1 at 0x40110e: file prog.c, line 5.

(gdb) run

[...]

Breakpoint 1, sometimes_crashes (f=0x7fffffffd1bc) at prog.c:5

5 fprintf(stderr,

(gdb) continue

Breakpoint 1, sometimes_crashes (f=0x7fffffffd1bc) at prog.c:5

5 fprintf(stderr,

(gdb) continue

 

To make this more repeatable, you could count how many times the function is called before the specific call you are interested in, and set a counter on that breakpoint (for example, "continue 30" to make it ignore the next 29 times it reaches the breakpoint).
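GDB can also set this ignore count directly on the breakpoint itself. A quick sketch, assuming the breakpoint above is number 1:

(gdb) ignore 1 29

After that, gdb silently passes over the next 29 hits of breakpoint 1 and stops on the 30th, which saves you from typing continue repeatedly.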

 

But where breakpoints get really powerful is in their ability to evaluate expressions at runtime, which allows you to automate this kind of testing. Enter: conditional breakpoints.

break [LOCATION] if CONDITION

(gdb) break sometimes_crashes if !f

Breakpoint 1 at 0x401132: file prog.c, line 5.

(gdb) run

[...]

Breakpoint 1, sometimes_crashes (f=0x0) at prog.c:5

5 fprintf(stderr,

(gdb)

 

 

Instead of having gdb ask what to do every time the function is called, a conditional breakpoint allows you to make gdb stop at that location only when a particular expression evaluates as true. If the execution reaches the conditional breakpoint location, but the expression evaluates as false, the debugger automatically lets the program continue without asking the user what to do.

 

Breakpoint commands

 

An even more sophisticated feature of breakpoints in the GNU Debugger is the ability to script a response to reaching a breakpoint. Breakpoint commands allow you to write a list of GNU Debugger commands to run whenever it reaches a breakpoint.

We can use this to work around the bug we already know about in the sometimes_crashes function and make it return from that function harmlessly when it provides a null pointer.

We can use silent as the first line to get more control over the output. Without this, the stack frame will be displayed each time the breakpoint is hit, even before our breakpoint commands run.

 

(gdb) break sometimes_crashes

Breakpoint 1 at 0x401132: file prog.c, line 5.

(gdb) commands 1

Type commands for breakpoint(s) 1, one per line.

End with a line saying just "end".

>silent

>if !f

>frame

>printf "Skipping call\n"

>return 0

>continue

>end

>printf "Continuing\n"

>continue

>end

(gdb) run

Starting program: /home/twaugh/Documents/GDB/prog

warning: Loadable section ".note.gnu.property" outside of ELF segments

Continuing

Continuing

Continuing

#0 sometimes_crashes (f=0x0) at prog.c:5

5 fprintf(stderr,

Skipping call

[Inferior 1 (process 9373) exited normally]

(gdb)

 

Dump binary memory

 

GNU Debugger has built-in support for examining memory using the x command in various formats, including octal, hexadecimal, and so on. But I like to see two formats side by side: hexadecimal bytes on the left, and ASCII characters represented by those same bytes on the right.

 

When I want to view the contents of a file byte-by-byte, I often use hexdump -C (hexdump comes from the util-linux package). Here is gdb's x command displaying hexadecimal bytes:

(gdb) x/33xb mydata
0x404040 <mydata>:    0x02    0x01    0x00    0x02    0x00    0x00    0x00    0x01
0x404048 <mydata+8>:    0x01    0x47    0x00    0x12    0x61    0x74    0x74    0x72
0x404050 <mydata+16>:    0x69    0x62    0x75    0x74    0x65    0x73    0x2d    0x63
0x404058 <mydata+24>:    0x68    0x61    0x72    0x73    0x65    0x75    0x00    0x05
0x404060 <mydata+32>:    0x00

 

 

What if you could teach gdb to display memory just like hexdump does? You can, and in fact, you can use this method for any format you prefer.

 

By combining the dump command to store the bytes in a file, the shell command to run hexdump on the file, and the define command, we can make our own new hexdump command to use hexdump to display the contents of memory.

 

 

(gdb) define hexdump

Type commands for definition of "hexdump".

End with a line saying just "end".

>dump binary memory /tmp/dump.bin $arg0 $arg0+$arg1

>shell hexdump -C /tmp/dump.bin

>end

 

 

 

Those commands can even go in the ~/.gdbinit file to define the hexdump command permanently. Here it is in action:

  

(gdb) hexdump mydata sizeof(mydata)
00000000  02 01 00 02 00 00 00 01  01 47 00 12 61 74 74 72  |.........G..attr|
00000010  69 62 75 74 65 73 2d 63  68 61 72 73 65 75 00 05  |ibutes-charseu..|
00000020  00                                                |.|
00000021





 

Inline disassembly

 

Sometimes you want to understand more about what happened leading up to a crash, and the source code is not enough. You want to see what's going on at the CPU instruction level.

The disassemble command lets you see the CPU instructions that implement a function. But sometimes the output can be hard to follow. Usually, I want to see what instructions correspond to a certain section of source code in the function. To achieve this, use the /s modifier to include source code lines with the disassembly.

 

(gdb) disassemble/s main
Dump of assembler code for function main:
prog.c:
11    {
   0x0000000000401158 <+0>:    push   %rbp
   0x0000000000401159 <+1>:    mov      %rsp,%rbp
   0x000000000040115c <+4>:    sub      $0x10,%rsp

12      int n = 0;
   0x0000000000401160 <+8>:    movl   $0x0,-0x4(%rbp)

13      sometimes_crashes(&n);
   0x0000000000401167 <+15>:    lea     -0x4(%rbp),%rax
   0x000000000040116b <+19>:    mov     %rax,%rdi
   0x000000000040116e <+22>:    callq  0x401126 <sometimes_crashes>
[...snipped...]

 

 

This, along with info registers to see the current values of all the CPU registers and commands like stepi to step one instruction at a time, allow you to have a much more detailed understanding of the program.

Reverse debug

 

Sometimes you wish you could turn back time. Imagine you've hit a watchpoint on a variable. A watchpoint is like a breakpoint, but instead of being set at a location in the program, it is set on an expression (using the watch command). Whenever the value of the expression changes, execution stops, and the debugger takes control.

So imagine you've hit this watchpoint, and the memory used by a variable has changed value. This can turn out to be caused by something that occurred much earlier; for example, the memory was freed and is now being re-used. But when and why was it freed?

The GNU Debugger can solve even this problem because you can run your program in reverse!

It achieves this by carefully recording the state of the program at each step so that it can restore previously recorded states, giving the illusion of time flowing backward.

To enable this state recording, use the target record-full command. Then you can use impossible-sounding commands, such as:

 

 

reverse-step, which rewinds to the previous source line

reverse-next, which rewinds to the previous source line, stepping backward over function calls

reverse-finish, which rewinds to the point when the current function was about to be called

reverse-continue, which rewinds to the previous state in the program that would (now) trigger a breakpoint (or anything else that causes it to stop)

 

Here is an example of reverse debugging in action:

 (gdb) b main
Breakpoint 1 at 0x401160: file prog.c, line 12.
(gdb) r
Starting program: /home/twaugh/Documents/GDB/prog
[...]

Breakpoint 1, main () at prog.c:12
12      int n = 0;
(gdb) target record-full
(gdb) c
Continuing.

Program received signal SIGSEGV, Segmentation fault.
0x0000000000401154 in sometimes_crashes (f=0x0) at prog.c:7
7      return *f;
(gdb) reverse-finish
Run back to call of #0  0x0000000000401154 in sometimes_crashes (f=0x0)
        at prog.c:7
0x0000000000401190 in main () at prog.c:16
16      sometimes_crashes(0);

 

 

These are just a handful of useful things the GNU Debugger can do. There are many more to discover. Which hidden, little-known, or just plain amazing feature of gdb is your favorite? Please share it in the comments.


Published in GNU/Linux Rules!


Access your Android device from your PC with this open source application based on scrcpy.

 

 

In the future, all the information you need will be just one gesture away, and it will all appear in midair as a hologram that you can interact with even while you're driving your flying car. That's the future, though, and until that arrives, we're all stuck with information spread across a laptop, a phone, a tablet, and a smart refrigerator. Unfortunately, that means when we need information from a device, we generally have to look at that device.

While not quite holographic terminals or flying cars, guiscrcpy by developer Srevin Saju is an application that consolidates multiple screens in one location and helps to capture that futuristic feeling.

Guiscrcpy is an open source (GNU GPLv3 licensed) project based on the award-winning scrcpy open source engine. With guiscrcpy, you can cast your Android screen onto your computer screen so you can view it along with everything else. Guiscrcpy supports Linux, Windows, and MacOS.

Unlike many scrcpy alternatives, Guiscrcpy is not a fork of scrcpy. The project prioritizes collaborating with other open source projects, so Guiscrcpy is an extension, or a graphical user interface (GUI) layer, for scrcpy. Keeping the Python 3 GUI separate from scrcpy ensures that nothing interferes with the efficiency of the scrcpy backend. You can screencast up to 1080p resolution and, because it uses ultrafast rendering and surprisingly little CPU, it works even on a relatively low-end PC.

Scrcpy, Guiscrcpy's foundation, is a command-line application, so it doesn't have GUI buttons to handle gestures, it doesn't provide a Back or Home button, and it requires familiarity with the Linux terminal. Guiscrcpy adds GUI panels to scrcpy, so any user can run it—and cast and control their device—without sending any information over the internet. Everything works over USB or WiFi (using only a local network). Guiscrcpy also adds a desktop launcher to Linux and Windows systems and provides compiled binaries for Linux and Windows.

 

Installing Guiscrcpy

Before installing Guiscrcpy, you must install its dependencies, most notably scrcpy. Possibly the easiest way to install scrcpy is with snap, which is available for most major Linux distributions. If you have snap installed and active, then you can install scrcpy with one easy command:

 

$ sudo snap install scrcpy

 

While it's installing, you can install the other dependencies. The Simple DirectMedia Layer (SDL 2.0) toolkit is required to display and interact with the phone screen, and the Android Debug Bridge (adb) command connects your computer to your Android phone.

On Fedora or CentOS:

 

 

$ sudo dnf install SDL2 android-tools

 

On Ubuntu or Debian:

 

$ sudo apt install SDL2 android-tools-adb

 

In another terminal, from a clone of the guiscrcpy source repository (which provides the requirements.txt file), install the Python dependencies:

 

$ python3 -m pip install -r requirements.txt --user

 

Setting up your phone

 

For your phone to accept an adb connection, it must have Developer Mode enabled. To enable Developer Mode on Android, go to Settings and select About phone. In About phone, find the Build number (it may be in the Software information panel). Believe it or not, to enable Developer Mode, tap Build number seven times in a row.

 

developer-mode.jpg

 

For full instructions on all the many ways you can configure your phone for access from your computer, read the Android developer documentation.

Once that's set up, plug your phone into a USB port on your computer (or ensure that you've configured it correctly to connect over WiFi).
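If you go the WiFi route, the usual adb workflow looks roughly like this, with the phone first plugged in over USB (the IP address below is just a placeholder for your phone's address on the local network):

$ adb devices
$ adb tcpip 5555
$ adb connect 192.168.1.20:5555

Once adb connect succeeds, you can unplug the USB cable and guiscrcpy can reach the device over the network.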

 

Using guiscrcpy

When you launch guiscrcpy, you see its main control window. In this window, click the Start scrcpy button. This connects to your phone, as long as it's set up in Developer Mode and connected to your computer over USB or WiFi.

 guiscrcpy-main.png

 

It also includes a configuration-writing system, where you can write a configuration file to your ~/.config directory to preserve your preferences between uses.

The bottom panel of guiscrcpy is a floating window that helps you perform basic controlling actions. It has buttons for Home, Back, Power, and more. These are common functions on Android devices, but an important feature of this module is that it doesn't interact with scrcpy's SDL, so it can function with no lag. In other words, this panel communicates directly with your connected device through adb rather than scrcpy.

guiscrcpy-bottompanel.png

 

The project is in active development and new features are still being added. The latest build has an interface for gestures and notifications.

With guiscrcpy, you not only see your phone on your screen, but you can also interact with it, either by clicking the SDL window itself, just as you would tap your physical phone, or by using the buttons on the panels.

 guiscrcpy-screenshot.jpg

 

Guiscrcpy is a fun and useful application that provides features that ought to be official features of any modern device, especially a platform like Android. Try it out yourself, and add some futuristic pragmatism to your present-day digital life.

 


Published in GNU/Linux Rules!

Deepin OS is one of the most modern-looking Linux distros. If you are a fan of sleek design and want an easy-to-use distro at the same time, get your hands on Deepin. It is also extremely easy to install. I am sure you'll love it.

The team has developed its own desktop environment based on Qt and also uses KDE Plasma's window manager, dde-kwin. The Deepin team has also developed 30 native applications to make users' day-to-day tasks easier to complete.

Some of the native Deepin applications are the Deepin Installer, Deepin File Manager, Deepin System Monitor, Deepin Store, Deepin Screen Recorder, Deepin Cloud Print, and so on. If you ever run out of options, do not forget that thousands of open source applications are also available in the store.

The development of Deepin started in 2004 under the name 'Hiwix', and it has been active since then. The distro's name has changed multiple times, but the motto has remained the same: provide a stable operating system that is easy to install and use.

The current version, Deepin OS 15.11, is based on the Debian stable branch. It was released on 19 July 2019 with some great features and many improvements and bug fixes.

 

 

 

 

Cloud sync

The most notable feature in this release is cloud sync. This feature is useful if you have multiple machines running Deepin or if you reset your Deepin installation more often than most. The distro keeps your system settings in sync with cloud storage as soon as you sign in, and if the installation is reset, the settings can be quickly imported from the cloud. Cloud sync covers system settings such as themes, sound settings, update settings, wallpaper, dock, and power settings. Unfortunately, the feature is currently only available to users with a Deepin ID in mainland China.

The team is testing the feature and will release it for the rest of the Deepin users soon. Other user-friendly Linux distributions would do well to develop a similar feature. Cloud sync is especially useful for new Linux users: they don't have to set up everything from scratch if they mess up their current installation.

 

 

dde-kwin

Deepin switched from dde-wm to dde-kwin in 15.10. dde-kwin consumes less memory and provides a faster and better user experience, and Deepin 15.11 brings more stability to dde-kwin.

Deepin Store

Among the 30 native applications the Deepin team has developed is the Deepin Store, which lets you easily browse and install applications from the distro's repositories. The new release ships with Deepin Store 5.3. The updated store app can now determine the user's region based on the Deepin ID's location. Another option has been added to the Deepin file manager for burning files to CD/DVD. Though CD/DVD is a thing of the past, if somebody still needs to burn data to a disc, it's extremely easy to do in Deepin.

To play media, the distro ships with the Deepin Movie application, which now supports loading subtitles by drag and drop: just drag the subtitle file and drop it on the player while the movie is playing. Besides these new features, there are more improvements and bug fixes in Deepin 15.11. For people looking for a beautiful, feature-rich, and stable Linux distribution, Deepin can be the platform of choice.



Published in GNU/Linux Rules!
Friday, 16 August 2019 19:35

GNOME desktop: Best extensions


 

Add functionality and features to your Linux desktop with these add-ons.

 

The GNOME desktop is the default graphical user interface for most of the popular Linux distributions and some of the BSD and Solaris operating systems. Currently at version 3, GNOME provides a sleek user experience, and extensions are available for additional functionality. We've covered GNOME extensions before, but to celebrate GNOME's 22nd anniversary, I decided to revisit the topic. Some of these extensions may already be installed, depending on your Linux distribution; if not, check your package manager.

 

 How to add extensions from the package manager
To install extensions that aren't in your distro, open the package manager and click Add-ons. Then click Shell Extensions at the top-right of the Add-ons screen, and you will see a button for Extension Settings and a list of available extensions.

  


add-onsextensions_6.png

 

1. GNOME Clocks

GNOME Clocks is an application that includes a world clock, alarm, stopwatch, and timer. You can configure clocks for different geographic locations. For example, if you regularly work with colleagues in another time zone, you can set up a clock for their location. You can access the World Clocks section in the top panel's drop-down menu by clicking the system clock. It shows your configured world clocks (not including your local time), so you can quickly check the time in other parts of the world.

 

 

2. GNOME Weather

GNOME Weather displays the weather conditions and forecast for your current location. You can access local weather conditions from the top panel's drop-down menu. You can also check the weather in other geographic locations using Weather's Places menu.


 

GNOME Clocks and Weather are small applications that have extension-like functionality. Both are installed by default on Fedora 30 (which is what I'm using). If you're using another distribution and don't see them, check the package manager. You can see both extensions in action in the image below.

clocksweatherdropdown_6.png

3. Applications Menu
I think the GNOME 3 interface is perfectly enjoyable in its stock form, but you may prefer a traditional application menu. On Fedora 30, the Applications Menu extension was installed by default but not enabled. To enable it, click the Extensions Settings button in the Add-ons section of the package manager and enable the Applications Menu extension.

add-onsextensionsettings_6.png

Now you can see the Applications Menu in the top-left corner of the top panel.

 applicationsmenuextension_5.png

4. More columns in applications view
The Applications view is set by default to six columns of icons, probably because GNOME needs to accommodate a wide array of displays. If you're using a wide-screen display, you can use the More columns in applications view extension to increase the number of columns. I find that setting it to eight makes better use of my screen by eliminating the empty columns on either side of the icons when I launch the Applications view.

Add system info to the top panel
The next three extensions provide basic system information to the top panel.

5. Harddisk LED shows a small hard drive icon with input/output (I/O) activity.
6. Load Average indicates Linux load averages taken over three time intervals.
7. Uptime Indicator shows system uptime; when it's clicked, it shows the date and time the system was started.

 

8. Sound Input and Output Device Chooser
Your system may have more than one audio device for input and output. For example, my laptop has internal speakers and sometimes I use a wireless Bluetooth speaker. The Sound Input and Output Device Chooser extension adds a list of your sound devices to the System Menu so you can quickly select which one you want to use.

9. Drop Down Terminal
Fellow writer Scott Nesbitt recommended the next two extensions. The first, Drop Down Terminal, enables a terminal window to drop down from the top panel by pressing a certain key; the default is the key above Tab; on my keyboard, that's the tilde (~) character. Drop Down Terminal has a settings menu for customizing transparency, height, the activation keystroke, and other configurations.

10. Todo.txt
Todo.txt adds a menu to the top panel for maintaining a file for Todo.txt task tracking. You can add or delete a task from the menu or mark it as completed.

todo.txtmenu_3.png

11. Removable Drive Menu
The editor Seth Kenlon suggested Removable Drive Menu. It provides a drop-down menu for managing removable media, such as USB thumb drives. From the extension's menu, you can access a drive's files and eject it. The menu only appears when removable media is inserted.

removabledrivemenu_3.png

 

12. GNOME Internet Radio
I enjoy listening to internet radio streams with the GNOME Internet Radio extension, which I wrote about in How to Stream Music with GNOME Internet Radio.

 

Please read more at opensource.com.

 


Published in GNU/Linux Rules!


 

Linux is fully capable of running not weeks, but years, without a reboot. In some industries, that's exactly what Linux does, thanks to advances like kpatch and kGraft.

For laptop and desktop users, though, that metric is a little extreme. While it may not be a day-to-day reality, it’s at least a weekly reality that sometimes you have a good reason to reboot your machine. And for a system that doesn’t need rebooting often, Linux offers plenty of choices for when it’s time to start over.

 

 

Understand your options

Before continuing though, a note on rebooting. Rebooting is a unique process on each operating system. Even within POSIX systems, the commands to power down and reboot may behave differently due to different initialization systems or command designs.

Despite this factor, two concepts are vital. First, rebooting is rarely requisite on a POSIX system. Your Linux machine can operate for weeks or months at a time without a reboot if that’s what you need. There’s no need to "freshen up" your computer with a reboot unless specifically advised to do so by a software installer or updater. Then again, it doesn’t hurt to reboot, either, so it’s up to you.

Second, rebooting is meant to be a friendly process, allowing time for programs to exit, files to be saved, temporary files to be removed, filesystem journals to be updated, and so on. Whenever possible, reboot using the intended interfaces, whether in a GUI or a terminal. If you force your computer to shut down or reboot, you risk losing unsaved and even recently saved data, and possibly corrupting important system information; you should only ever force your computer off when there's no other option.

 

 

Click the button

The first way to reboot or shut down Linux is the most common one, and the most intuitive for most desktop users regardless of their OS: It’s the power button in the GUI. Since powering down and rebooting are common tasks on a workstation, you can usually find the power button (typically with reboot and shut down options) in a few different places. On the GNOME desktop, it's in the system tray: 

gnome-menu-power.jpg

It’s also in the GNOME Activities menu:

gnome-screen-power.jpg

On the KDE desktop, the power buttons can be found in the Applications menu:

kde-menu-power.jpg

You can also access the KDE power controls by right-clicking on the desktop and selecting the Leave option, which opens the window you see here:

kde-screen-power.jpg

Other desktops provide variations on these themes, but the general idea is the same: use your mouse to locate the power button, and then click it. You may have to select between rebooting and powering down, but in the end, the result is nearly identical: Processes are stopped, nicely, so that data is saved and temporary files are removed, then data is synchronized to drives, and then the system is powered down.

 

 

Push the physical button

Most computers have a physical power button. If you press that button, your Linux desktop may display a power menu with options to shut down or reboot. This feature is provided by the Advanced Configuration and Power Interface (ACPI) subsystem, which communicates with your motherboard’s firmware to control your computer’s state.

ACPI is important but it’s limited in scope, so there’s not much to configure from the user’s perspective. Usually, ACPI options are generically called Power and are set to a sane default. If you want to change this setup, you can do so in your system settings.

On GNOME, open the system tray menu and select Activities, and then Settings. Next, select the Power category in the left column, which opens the following menu:

gnome-settings-power.jpg

In the Suspend & Power Button section, select what you want the physical power button to do.

The process is similar across desktops. For instance, on KDE, the Power Management panel in System Settings contains an option for Button Event Handling.

 

kde-power-management.jpg

After you configure how the button event is handled, pressing your computer’s physical power button follows whatever option you chose. Depending on your computer vendor (or parts vendors, if you build your own), a button press might be a light tap, or it may require a slightly longer push, so you might have to do some tests before you get the hang of it.

Beware of an over-long press, though, since it may shut your computer down without warning.

 

 

Run the systemctl command

If you operate more in a terminal than in a GUI desktop, you might prefer to reboot with a command. Broadly speaking, rebooting and powering down are processes of the init system—the sequence of programs that bring a computer up or down after a power signal (either on or off, respectively) is received.

On most modern Linux distributions, systemd is the init system, so both rebooting and powering down can be performed through the systemd user interface, systemctl. The systemctl command accepts, among many other options, halt (halts disk activity but does not cut power), reboot (halts disk activity and sends a reset signal to the motherboard), and poweroff (halts disk activity and then cuts power). These commands are mostly equivalent to starting the target file of the same name.

For instance, to trigger a reboot:

sudo systemctl start reboot.target
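You can also call the same targets through systemctl's shortcut verbs:

sudo systemctl reboot

sudo systemctl poweroff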

 

Run the shutdown command

In traditional UNIX, before the days of systemd (and for some Linux distributions, like Slackware, that's still the case), there were commands specific to stopping a system. The shutdown command, for instance, can power down your machine, but it has several options to control exactly what that means.

This command requires a time argument, in minutes, so that shutdown knows when to execute. To reboot immediately, append the -r flag:

sudo shutdown -r now

To power down immediately:

sudo shutdown -P now

Or you can use the poweroff command:

poweroff

To reboot after 10 minutes:

sudo shutdown -r 10
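If you schedule a shutdown and then change your mind, you can cancel the pending shutdown:

sudo shutdown -c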

The shutdown command is a safe way to power off or reboot your computer, allowing disks to sync and processes to end. This command prevents new logins during the final 5 minutes before the shutdown commences, which is particularly useful on multi-user systems.

On many systems today, the shutdown command is actually just a call to systemctl with the appropriate reboot or power off option.

 

Run the reboot command

The reboot command, on its own, is basically a shortcut to shutdown -r now. From a terminal, this is the easiest and quickest reboot command:

sudo reboot

If your system is being blocked from shutting down (perhaps due to a runaway process), you can use the --force flag to make the system shut down anyway. However, this option skips the actual shutting-down process, which can be abrupt for running processes, so it should only be used when the shutdown command is blocking you from powering down.

On many systems, reboot is actually a call to systemctl with the appropriate reboot or power off option.

 

Init

On Linux distributions without systemd, there are up to 7 runlevels your computer understands. Different distributions can assign each mode uniquely, but generally, 0 initiates a halt state, and 6 initiates a reboot (the numbers in between denote states such as single-user mode, multi-user mode, a GUI prompt, and a text prompt).

These modes are defined in /etc/inittab on systems without systemd. On distributions using systemd as the init system, the /etc/inittab file is either missing, or it’s just a placeholder.

The telinit command is the front-end to your init system. If you’re using systemd, then this command is a link to systemctl with the appropriate options.

To power off your computer by sending it into runlevel 0:

sudo telinit 0

To reboot using the same method:

sudo telinit 6

How unsafe this command is for your data depends entirely on your init configuration. Most distributions try to protect you from pulling the plug (or the digital equivalent of that) by mapping runlevels to friendly commands.

You can see for yourself what happens at each runlevel by reading the init scripts found in /etc/rc.d or /etc/init.d, or by reading the systemd targets in /lib/systemd/system/.
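For example, on a systemd machine you can see how the classic runlevels map onto targets:

ls -l /lib/systemd/system/runlevel*.target

The listing shows runlevel0.target and runlevel6.target as symbolic links to poweroff.target and reboot.target, respectively.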

 

Apply brute force

So far I’ve covered all the right ways to reboot or shut down your Linux computer. To be thorough, I include here additional methods of bringing down a Linux computer, but by no means are these methods recommended. They aren’t designed as a daily reboot or shut down command (reboot and shutdown exist for that), but they’re valid means to accomplish the task.

If you try these methods, try them in a virtual machine. Otherwise, use them only in emergencies.

 

 

Proc

A step lower than the init system is the /proc filesystem, which is a virtual representation of nearly everything happening on your computer. For instance, you can view your CPUs as though they were text files (with cat /proc/cpuinfo), view how much power is left in your laptop’s battery, or, after a fashion, reboot your system.

There’s a provision in the Linux kernel for system requests (Sysrq on most keyboards). You can communicate directly with this subsystem using key combinations, ideally regardless of what state your computer is in; it gets complex on some keyboards because the Sysrq key can be a special function key that requires a different key to access (such as Fn on many laptops).

An option less likely to fail is using echo to insert information into /proc, manually. First, make sure that the Sysrq system is enabled:

echo 1 | sudo tee /proc/sys/kernel/sysrq

To reboot, you can use either Alt+Sysrq+B or type:

echo b | sudo tee /proc/sysrq-trigger

This method is not a reasonable way to reboot your machine on a regular basis, but it gets the job done in a pinch.

 

Sysctl

Kernel parameters can be managed during runtime with sysctl. There are lots of kernel parameters, and you can see them all with sysctl --all. Most probably don’t mean much to you until you know what to look for, and in this case, you’re looking for kernel.panic.

You can query kernel parameters using the --value option:

sudo sysctl --value kernel.panic

 

If you get a 0 back, then the kernel you’re running has no special setting, at least by default, to reboot upon a kernel panic. That situation is fairly typical since rebooting immediately on a catastrophic system crash makes it difficult to diagnose the cause of the crash. Then again, systems that need to stay on no matter what might benefit from an automatic restart after a kernel failure, so it’s an option that does get switched on in some cases.

You can activate this feature as an experiment (if you're following along, try this in a virtual machine rather than on your actual computer):

 

sudo sysctl kernel.panic=1

 

Now, should your computer experience a kernel panic, it is set to reboot instead of waiting patiently for you to diagnose the problem. You can test this by simulating a catastrophic crash with sysrq. First, make sure that Sysrq is enabled:

 

echo 1 | sudo tee /proc/sys/kernel/sysrq

And then simulate a kernel panic:

echo c | sudo tee /proc/sysrq-trigger

Your computer reboots immediately.

 

Reboot responsibly

Knowing all of these options doesn't mean that you should use them all. Give careful thought to what you're trying to accomplish, and what the command you've selected will do. You don't want to damage your system by being reckless. That's what virtual machines are for. However, having so many options means that you're ready for most situations.

Have I left out your favorite method of rebooting or powering down a system? List what I’ve missed in the comments!

 


Source: opensource.com. Please visit and support the Linux project.


Published in GNU/Linux Rules!


 

What is a multi-user operating system? When an OS allows multiple people to use the computer at the same time without affecting each other's files, it is a multi-user OS. Linux belongs to this category: it can have multiple users and groups, each with their own personal files and preferences. This article will help you with the tasks below.

 

  • Managing users (create/edit/delete accounts, suspend accounts)
  • Managing users' passwords (set password policies, expiration, further modifications)
  • Managing groups (create/delete user groups)

 

In this article, we will discuss the most useful Linux commands for these tasks, along with their syntax.


How to create a user

 

1) useradd : Add a user

 

syntax : useradd <username>

eg : We will create a user named "jesica". The command is useradd jesica. First, I switch to the root user with the sudo su command, as I am a sudo user.

UGLINUX1.png

You can see that when we created the user from the root account, it just added the user without asking for a password for the newly created user. So now we will set a password for the user jesica.

 

 

2) passwd : set a password for users

 

syntax : passwd <username>

UGLINUX2.png

Here, I set a password for jesica. I set the password itself to "jesica", but you can use your own. The password you type will not be displayed, for security reasons. As my password has only 6 characters, we get a message saying the password is shorter than 8 characters. That is a password policy; we will discuss these later in this article.

 

* Now we have created a new user with the useradd command and set a password with the passwd command. This was done on CentOS, but on some other Linux distributions the adduser command is used instead of useradd.

* If you are a normal user, you need superuser privileges to add a new user, so you have to run the commands as sudo useradd <username> and sudo passwd <username>.

 

Where do all of these users reside?

We discussed this in the "Linux File System Hierarchy" article. Just as the /root directory is the root user's home directory, normal users' home directories live under /home. All the user profiles are stored inside the /home directory. You can use the command ls /home to check which users currently exist on your OS. Check the image below, which shows the users on my OS.

 UGLINUX3.png

 

 

What is the /etc/passwd file?

 

When you create a user with the useradd command without any options, several configuration files are changed. They are as below:

 

  1. /etc/passwd
  2. /etc/shadow
  3. /etc/group
  4. /etc/gshadow

 

The output of the above files on my OS is shown below.

 

1. /etc/passwd file

UGLINUX4.png

 

 

2. /etc/shadow file

UGLINUX5.png 

 

3. /etc/group file

UGLINUX6.png

 

When we create a new user with the useradd command without any options, reasonable defaults are set for every field of the new user's entry in the /etc/passwd file. It is just a text file that contains useful information about users, such as the username, user ID, group ID, home directory path, shell, and so on.

 

Let's discuss the fields in the /etc/passwd file using the example entry: student:x:1000:1000:student:/home/student:/bin/bash

 

1. student : This is the username. We use this name to log in.

 

2. x : This is the password field. The actual encrypted password is stored in the /etc/shadow file. You can see the password record for the user student in the /etc/shadow image above.

 

3. 1000 : This is the user ID (UID). Every user has a UID. It is 0 for the root user, 1-99 is reserved for predefined accounts, and 100-999 is for system and administrative accounts. Normal users have UIDs starting from 1000. Extra: you can also use the id command to view user details.

 

4. 1000 : The primary group ID (GID). See the /etc/group file image above.

5. student : Comment field

6. /home/student : User's home directory

7. /bin/bash : The shell used by the user
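You can pull the same information from the command line. For example, for the user student:

getent passwd student

id student

The first command prints the user's /etc/passwd entry, and the second shows the UID, primary GID, and group memberships.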

 

 

* Summary of the above

 

  • When a user is created, a new profile is created under /home/<username> by default.
  • Hidden files like .bashrc, .bash_profile, and .bash_logout are copied to the user's home directory. These hidden files set the user's environment variables and will be covered in future articles.
  • A separate group is created for each user, with the user's name.

 

Useradd command with some options

 

1.) If the user's home directory was accidentally not created by the useradd command, you can request it explicitly with useradd -m <username>.

 UGLINUX7.png

 

If you want to create a user without a home directory, use useradd -M panda.

 

2.) If you want to place the user's home directory in a different location

UGLINUX8.png 
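The command in the screenshot is along these lines (using the example username boo from this section):

sudo useradd -d /boo boo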

 

In the above command, you use useradd with the -d option to change the default home directory path; /boo is the new home directory, and the username comes last. You can see in the image below that the /etc/passwd file has a different home directory entry for the user boo, because we changed its home directory.

UGLINUX9.png 

 

3.) Add a comment for the user when creating the account: useradd -c "<comment>" <username>

UGLINUX10.png 

 

In /etc/passwd file :

UGLINUX11.png 

 

 

4.) Create a user with a UID of your choice: useradd -u <UID> <username>

5.) Create a user with a UID and GID of your choice: useradd -u <UID> -g <GID> <username>

6.) Create a user and add it to additional groups: useradd -G <group1>,<group2> <username>. There can be one or more groups, and they should be separated with commas (,).

7.) Create a user but disable shell login: useradd -s /sbin/nologin <username>. With the above command, we disable shell interaction for the user, but the account remains active.
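For example, several of these options can be combined in one command (the username svcuser is just illustrative; the wheel group exists by default on CentOS):

sudo useradd -u 1500 -G wheel -s /sbin/nologin svcuser

grep svcuser /etc/passwd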


 

How to remove an account

 

3. userdel : Remove a user

 

syntax : userdel <username>

 

eg : userdel -r <username>

 

* When deleting a user, go with the -r option. Why? With the -r option, the user is removed along with its home directory. If removed without -r, the user's home directory is not deleted.


 

How to modify a user account

 

4. usermod : Modify a user

 

syntax : usermod <options> <username>

 

* Here we can use all the options available for the useradd command. Below are some options that were not discussed above.


 

1.) How to change the user's name

 

usermod -l <new username> <current username>

 

2.) To lock a user

 

usermod -L <username>

 

3.) To unlock a user

 

usermod -U <username>

 

4.) To change the group of a user

 

usermod -G <group1>,<group2> <username>

 

5.) To append a group to a user

 

usermod -aG <group> <username>

 

* Here appending means adding groups without removing the ones the user already belongs to. If we use -G without -a, the existing supplementary groups are replaced by the new ones. This applies to supplementary groups; the primary group is changed with the -g option instead.
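
A short sketch of the difference, assuming the student user and a wheel group exist on your system:

eg : usermod -aG wheel student     ( student keeps its existing groups and gains wheel )
eg : usermod -G wheel student     ( student's supplementary groups are replaced by wheel only )
eg : id student     ( verify the group membership )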

 

What is a group ?

 

A group is a collection of one or more users in a Linux OS. Just like users, groups have a group name and an ID ( GID ). Group details can be found in the /etc/group file. There are two main types of groups in a Linux OS: primary groups and supplementary groups. Every user, once created, gets a new group with the same name as the account; that is the primary group. Supplementary groups are additional groups that can have one or more users as members.


 

How to create a group

 

5. groupadd : create a Linux group

 

syntax : groupadd <options> <group name>

 

A few examples

 

1.) To create a group named "student"

 

groupadd student

 

2.) Define a different group id ( GID )

 

groupadd -g 5000 student


 

How to modify an existing group

 

6. groupmod : modify a group

 

syntax : groupmod <options> <group name>

 

To change the name of a group : groupmod -n <new name> <group name> To change the group ID : groupmod -g <GID> <group name>
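
For example, renaming the student group and giving it a new GID could look like this (hypothetical names and values):

eg : groupmod -n pupils student
eg : groupmod -g 6000 pupils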

 


 

How to delete an existing group

 

7. groupdel : delete a group

 

syntax : groupdel <group name>


 

How to manage user passwords using a password policy ?

 

As we discussed above, while the /etc/passwd file stores user details, the /etc/shadow file stores the users' password details. I attached an image of the /etc/shadow file above. Here we use a term called password aging, and we use the chage command to edit the password aging policy. Look at the image below.

 UGLINUX12.png

Refer to the image above; the options are described below.

 

  • chage -d 0 <username> : Force the user to change the password at the next login.
  • chage -E <YYYY-MM-DD> <username> : Expire a user account on the given date ( it must be in the format YYYY-MM-DD ).
  • chage -M 90 <username> : Set a password policy requiring the password to be renewed every 90 days.
  • chage -m 7 <username> : Set the minimum number of days ( 7 ) the user must wait before changing the password again.

 

* Inactive days define how many days the account remains usable after the password expires. If the user doesn't change the password within the inactive period, the account is locked.

 

chage -l <username> : Display the user's current password policy settings.
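
Putting these options together for the student user from the earlier examples (a sketch; adjust the username and dates to your system):

eg : chage -M 90 -m 7 student
eg : chage -E 2025-12-31 student
eg : chage -d 0 student
eg : chage -l student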

 

The default values for all of the above ( password expiration days, inactive days and so on ) live in the /etc/login.defs configuration file. The UID and GID ranges used for new user and group accounts can also be seen there. You can change the values in /etc/login.defs to suit your requirements.

UGLINUX13.png 
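
To peek at the most relevant defaults without opening the whole file, something like the following can be used (a sketch; the exact values differ between distributions):

eg : grep -E 'PASS_MAX_DAYS|PASS_MIN_DAYS|PASS_WARN_AGE|UID_MIN|GID_MIN' /etc/login.defs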

 

Now you have learned the essentials of Linux users and groups. This is not a small topic; there are many more commands worth exploring under it.

 

 You can see our previous posts on related topics.

 

 

 

 

 

BannerFinalGNULINUZROCKS

Published in GNU/Linux Rules!
Thursday, 27 June 2019 11:25

Learn all about Unix and Linux

BANNERGnulinuxrocks

I get this question quite often, but I struggle to explain it, especially in a few simple words. Anyway, this is a very interesting topic because things are very complicated when it comes to UNIX vs Linux. There are business matters, licenses, policies, government influence, etc.

Because Unix is an operating system and Linux is a kernel, they are different in nature, they have different purposes, and they aren't easily comparable. You can't summarize the differences in a single sentence. But don't worry: after this history lesson and a look at their features and purposes, you will get the "big picture" and everything will be nice and clear. You can jump to the conclusion at the end of the post if you want a quick read-through.

 

Multix

Let's jump to the late 1960s. Computers at the time were designed to do single, specific tasks. For example, there was a computer for calculating monthly salaries, or a computer to do word processing in a library, etc. Each of them ran a program specifically designed for that particular hardware and the task it was meant to do. Programs written for one computer vendor (or manufacturer, like IBM or Apple) could not be executed on a computer developed by a different vendor. Those computers could not handle the execution of multiple programs at a time, only one. So if a user wanted to listen to some music while writing a document, that was impossible. To overcome those issues the Multics (also known as Multix) operating system was developed, initially as a collaborative project between MIT, General Electric and Bell Labs. This is the root, the OS that laid the fundamentals for every later one, including Windows, MacOS, Android, Linux-based operating systems and many more.

Multics (Multiplexed Information and Computing Service) is a time-sharing operating system. This means that many programs can share the hardware resources and switch on finite time intervals. In other words, the idea behind time-sharing operating systems is the mechanism that works as follows: 

  • One program uses the hardware (CPU, RAM memory, etc.) for some time, let's say 20ns (nanoseconds), then it is stopped.
  • Now the hardware resources are available for another program for an equal amount of time, 20ns.

 Due to the very small intervals (very fast switching) there is an illusion that multiple programs are running concurrently. The very same principle is present in every modern operating system.

In addition to time-sharing capabilities, Multics was designed with the idea of a modular hardware structure and software architecture. It consists of many small "building blocks". Each block can be independently swapped with another one that does the same job, perhaps in a different way. The result is a system that can grow over time by reusing blocks instead of reimplementing them. So when there is a hardware change, only a few blocks are updated and the rest are reused. If the very same feature is required by multiple programs, they can share a common implementation. For example, many programs can share the same implementation for transforming a word to lowercase, saving time, effort, frustration among developers, and money. Those are the fundamentals of Multics.

Besides all the goodness of Multics, Dennis Ritchie and Ken Thompson (at the time employed at Bell Labs) were not satisfied with all aspects of the project, mainly with the size and the complexity introduced to achieve its goals. In their spare time they started working on a similar hobby project (actually a reimplementation of Multics) named Unics (Uniplexed Information and Computing Service), also known as Unix. As you can see, the name Unics is influenced by Multics, the only difference being the swap of the word "multiplexed" for "uniplexed". The reason for this swap is a technical limitation of the Unics project at the very beginning: it could not handle the execution of multiple programs simultaneously, only one single program at a time, so "uniplexed" was used. It is worth mentioning that the Unics project was intended only for internal use inside Bell Labs and was developed without any organizational backing.

Since we reached the birth of Unics, it’s time for a small recap:

  1. Multics development started in the late 1960s.
  2. Multics' goals are still valuable today; its time-sharing (multi-tasking) is the prime example.
  3. There were complaints about its size and complexity.
  4. In the early 1970s, Unics development began on a smaller scale to overcome the disadvantages of Multics. It was a hobby project of Dennis Ritchie and Ken Thompson.

Let’s continue with more details about Unics and its development.

Unix

Unics was initially written in assembly language. Because of this, most of the code was hardcoded for specific hardware and not easily portable to other computers. No better alternative was available at the time.

Meanwhile, the C programming language was released (created by Dennis Ritchie, also at Bell Labs). The intention of this programming language was to be used for writing portable programs. That is achieved by requiring a relatively simple compiler, mapping efficiently to machine instructions, requiring minimal run-time support, etc. For non-technical people, this is truly amazing.

At this moment in time there is Unics, which is not portable, and there is a new programming language that offers portability. It sounds like a wise idea to rewrite Unics in C. In the mid-1970s Unics was rewritten in C, introducing portability, but there was a legal issue preventing public release and wide use.

 

From a business and legal perspective things are quite interesting. There was a giant telecommunication company named AT&T that owned the previously mentioned Bell Labs research center. Due to the nature of the business and how available the technology was at that time, AT&T was considered a government-controlled monopoly. To simplify things, prices of telecommunication services were controlled by the government so they could not skyrocket, but AT&T also could not go bankrupt thanks to the income guaranteed by the government. The point is that Bell Labs had a stable source of income (funded by AT&T) and could afford to allocate resources to whatever task they wanted with little to no worry about the cost. Almost complete freedom, which is quite good for a research center.

Because of the monopoly issues and other legal matters, AT&T was forbidden to enter the computer market; only telecommunication services were allowed. All they could do was license the source code of Unics. It is worth mentioning that the source code was distributed to other research centers and universities for further research, development, and collaboration, but under the corresponding license terms.

Later there was a separation between Bell Labs and AT&T. Since the government-controlled monopoly applied to AT&T, Bell Labs was free after the separation, so no legal issues were present anymore.

System V and BSD

By the 1980s, AT&T released a commercial version of Unics named System 5 (known as System V).

In the meantime, while AT&T was working on System 5, at the University of California, Berkeley, development of the previously shared code from Bell Labs continued, and a very similar Unics operating system was developed and released as BSD (Berkeley Software Distribution).

It’s time for a recap:

  • Initial development of Unics was done at Bell Labs.
  • The Unics source code was shared among universities and other researchers.
  • Bell Labs and AT&T separated.
  • AT&T continued the development of their own version of Unics, named System 5.
  • At the University of California, Berkeley, development of the previously shared source code continued and another operating system was released as BSD (Berkeley Software Distribution).

  • So by the mid-80s we already had two different Unics distributions (System 5 and BSD), each evolving on its own but sharing a common predecessor.
  • There is no such thing as "the real" or "the genuine" Unics operating system. As time passed, there were even more variants of what was available in those two branches.
  • HP branched out, developing an operating system named HP-UX (Hewlett-Packard Unix). Sun branched out with an operating system named Solaris. IBM branched out and continued developing their version, named AIX.

It is worth mentioning that all of these branch-outs were done to provide unique features so that a given vendor could offer a better product on the market. For example, the networking stack was first available in the BSD branch but was later cross-ported to all the other branches. Almost every nice feature was cross-ported to all the other branches at some point. To overcome the issues of cross-porting features, and to improve reusability at a higher level, POSIX (Portable Operating System Interface) was introduced by the IEEE Computer Society in 1988. It is a standard that, if followed by vendors, guarantees compatibility between operating systems, so programs are executable on other operating systems with no modifications required.

Although reusability was present to some degree, the addition of new features required a lot of work, which made development slower and harder. This was due to the terms and conditions of the AT&T license under which the Unics source code was distributed. To eliminate all the legal issues around sharing the source code, people working on the BSD branch started replacing the original source files inherited from AT&T with their own implementations, releasing them under the BSD license, which is more liberal in terms of reusability, modifications, distribution, etc. The idea was to release the Unics operating system without any restrictions. Today this is known as free software: free as in the freedom to study, modify and distribute the modified version without any legal action against the developer.

This idea was not welcomed by AT&T, so there was a lawsuit. It turned out that there was no violation of any kind. The trend of replacing files continued, and BSD version 4.4 (also known as BSD Lite) was released free of any source code originating from AT&T.

One more recap: 

  • Many branch outs.
  • POSIX standard
  • It turned out that most features were cross-ported sooner or later.
  • It is hard to say what the "root" or "genuine" Unics operating system is anymore. Everything branched from the same predecessor and every feature was cross-ported, so everything is more or less a variation of the same OS.
  • Due to legal issues stemming from the AT&T license, development was hard and redundancy was common.
  • BSD started removing all the files originating from AT&T and providing source files that are free for modification and redistribution.

Now it is time to mention the GNU project.

GNU

gnurules.png

 

GNU (GNU's Not Unix) is a free-software, mass-collaboration project announced in 1983. Its aim is to give users freedom and control in the use of their computers and electronic devices.

Can you spot the similar idea with what people behind BSD are doing already?

Both are related to the term free software, but with a very big difference in how free software should be treated, which becomes obvious by comparing the GPL license (released by GNU) with the BSD license. Basically, it comes down to:

  • The BSD License is less restrictive. It says do whatever you want with the source code. No restrictions of any kind.
  • The GPL License is more restrictive but in a good way. It puts emphasis on preventing the use of open source code (GPL licensed) in proprietary closed source applications. It states that if any GPL licensed source code is being used, the source of your code must be released under the same license too. Basically, with the GPL license, you can take whatever you want, but you must give back whatever you produce, thus raising the amount of available free software.
  • As a comparison, the BSD license does not state that whatever is being produced must be released as free software too. It can be released as proprietary closed source software without sharing any of the source code.

In addition to the license, the GNU project develops a lot of the software required to have a fully functional operating system. Some of its tools are the GNU C library, the GNU Compiler Collection (GCC) and the GNOME desktop environment, all of which are currently used in popular Linux distros.

Having all this in mind let’s talk about Linux by briefly explaining what it is.

Linux

tux-linux01011101.png

Linux is not an operating system like BSD. Linux is a Kernel.

But what is the difference between a Kernel and an Operating System?

  • An operating system is a collection of many things working as a whole, fully functional, complete product.
  • A kernel is only a piece of the whole operating system.
  • In terms of the Linux kernel, it can be said that it is nothing more than a bunch of drivers. Even though there is a bit more, for this purpose, we will ignore the rest.

Now, what are drivers? A Driver is a program that can handle the utilization of a specific piece of hardware.

Short recap:

  • A Driver is a program that handles the utilization of a specific piece of hardware.
  • Linux is just a bunch of drivers (and something more that will be ignored for now)
  • Linux is a kernel.
  • A Kernel is a piece of an Operating System.

I assume we are all clear by now so we can begin with the Linux history lesson.

Its origins are in Finland in the early 1990s, about 20 years after Unics. Linus Torvalds was a student at that time and was influenced by the GNU license and by Minix (a Unics-based operating system for education). He liked many things about Unics operating systems, but also disliked some of them. As a result, he started working on his own operating system, utilizing many of the GNU tools already available. The end result is that Linus developed only a kernel (the drivers). Linux-based operating systems are sometimes referred to as GNU/Linux operating systems because, without the GNU tools, the Linux kernel is useless in real life.

It can be said that Linux, to some point, is just a reimplementation of what was available as the Unics operating system (BSD, System 5…) but with a license that puts more emphasis on keeping the software free by enforcing modifications to be contributed back, thus available for studying, modifications, and further distribution.

The time-sharing capabilities that allow multitasking, the C programming language that provides portability, the modular software design that allows swapping a single piece when needed and reusing the rest, and other ideas are inherited from Unics. Those are the fundamentals mentioned at the very beginning of this post, but without sharing any source code with Unix.

It is worth mentioning that Linux was intended to be a small school project. Many computer scientists were interested in trying it out of curiosity.

While Linux was still young, the lawsuit between BSD and AT&T was ongoing. Due to the uncertainty hanging over BSD, many companies that used BSD moved to Linux as a very similar alternative on more stable footing. Linux was also one single source of code, while the BSD lineage was spread across many independent branches (BSD, Solaris, HP-UX, AIX, etc.).

From the perspective of a company requiring an operating system for its products (Wi-Fi routers, cable TV boxes, etc.), Linux was the better choice. Having a single branch guarantees that all features merged into that one branch are available right away, and a single branch is simpler to maintain. On the BSD side, due to the independent development, new features still required some sort of cross-porting, which sometimes broke something else.

This is the historical reason why Linux gained great popularity even in the early stages of its development, while still not being on par with BSD and lacking many features.

Unics vs Unix, Multics vs Multix

Did you notice that sometimes the term Unics is used instead of Unix?

The fun fact is that the original name of the project was Unics, but somehow people started calling it Unix. There are many stories about why Unix became the popular name, but no one can tell for sure. Now Unix is accepted as the official name of the project.

The very same thing happened with Multics: over time everyone called it Multix even though that was not its official name.

Conclusion – Unix vs Linux

unixhistory.jpg

A timeline of Unix-like OSes

At this moment we know the history and the evolution of the operating systems, we do know why all these branch outs occurred, we do know how government policy can influence things. The whole story can be summarised as:

  • Unix started as an operating system developed at Bell Labs in the late 1960s and 1970s. With all the branching mentioned above, and the cross-porting of features between branches, it is simply a chaotic situation and it is hard to say what the genuine Unix is anymore.
  • It can be said that the most genuine Unix operating systems are System 5 and BSD.
  • System 5 is developed by AT&T as a continuation of the work done at Bell Labs after their separation.
  • The most popular direct descendant of Unix is the BSD project. It took the source code developed at Bell Labs, then replaced any source code released under the restrictive license and continued as a free distribution.
  • Other popular BSD distributions of today are FreeBSD, OpenBSD and NetBSD, but many more are available.
  • Linux, on the other hand, does not share any code with Unix (from Bell Labs); it just follows the same principle of utilizing small building blocks to produce something of bigger value. This is mostly known as writing a small program that does one thing and does it well. Those programs are then combined with mechanisms known as "pipes" and "redirection", so the output of one program becomes the input of another program, and as the data flows something of bigger value is achieved as the final result (a small illustration follows this list).
  • In terms of licenses, Unics had a very restrictive license policy while being developed and was later forked under a free license (BSD). Linux, on the other hand, has used the GPL license from the very beginning.
  • Both are following the POSIX standard so program compatibility is guaranteed.
  • Both are using the same shell for interfacing with the kernel. It is Bash by default.
  • BSD is distributed as a whole system.
  • Linux-based operating systems are made with the Linux Kernel in combination with GNU software and many other smaller utilities that fulfill each other to accomplish the goal.
  • Popular Linux Distributions: Ubuntu, Mint, CentOS, Debian, Fedora, Red Hat, Arch Linux, and many more. There are hundreds of distros nowadays, some of them even optimized for a specific purpose, like gaming or for old computers.
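
As a small illustration of the pipes and redirection mentioned above (a generic sketch, not tied to any particular distro):

eg : ls /etc | grep conf | wc -l > count.txt

Here the listing of /etc is piped into grep to keep only names containing "conf", that result is piped into wc -l to count the lines, and the final number is redirected into the file count.txt.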

Even though we stated that there is one single source – the Linux Kernel, there are many Linux Distributions (Linux based operating systems). This may be confusing for someone so I will explain this just in case:

Every Linux distribution (distro) ships different versions of the Linux kernel or the tools, or simply uses different building blocks. For example, Ubuntu uses systemd as its init system, while Slackware uses SysV as the equivalent. There is nothing wrong with either; they do the same thing with some differences, and there are use cases where one is better than the other.

Another example: some users prefer to always have the latest version of their software, so they use rolling-release Linux-based operating systems like Arch Linux. Others may prefer a stable environment with no major changes for 5 or more years; the Ubuntu LTS (Long Term Support) releases are ideal for this use case, which is why they are widely used on servers along with CentOS.

As you can see, there are even more similarities between the two. Linux-based operating systems are in the same "chaotic" situation too. There is no such thing as the real or genuine Linux-based operating system. There are many of them, but at least they all share the same source: the Linux kernel.

It is worth mentioning that programs written for Linux-based operating systems, or shell commands that follow the POSIX standard, can be executed on Unix-based operating systems too. Thus major software like Firefox or the GNOME desktop environment is available everywhere without requiring any modifications.

Another fun fact not mentioned before is that even macOS (used on Apple computers) is considered a BSD derivative. Not every release, but some of them are.

As you can see, in reality, things are even more complicated and interesting.

BannerFinalGNULINUZROCKS

Published in GNU/Linux Rules!