Andromeda Computer - Blog
Tuesday, 15 October 2019 16:36

Benefits of centralizing GNOME in GitLab

The GNOME project's decision to centralize on GitLab is creating benefits across the community—even beyond the developers.


 

“What’s your GitLab?” is one of the first questions I was asked on my first day working for the GNOME Foundation—the nonprofit that supports GNOME projects, including the desktop environment, GTK, and GStreamer. The person was referring to my username on GNOME’s GitLab instance. In my time with GNOME, I’ve been asked for my GitLab a lot.

We use GitLab for basically everything. In a typical day, I respond to several issues, reference bug reports, and occasionally need to modify a file. I don’t do this in the capacity of being a developer or a sysadmin. I’m involved with the Engagement and Inclusion & Diversity (I&D) teams. I write newsletters for Friends of GNOME and interview contributors to the project. I work on sponsorships for GNOME events. I don’t write code, and I use GitLab every day.

 

The GNOME project has been managed a lot of ways over the past two decades. Different parts of the project used different systems to track changes to code, collaborate, and share information both as a project and as a social space. However, the project decided it needed to become more integrated, and the change took about a year from conception to completion. There were a number of reasons GNOME wanted to switch to a single tool for use across the community. External projects touch GNOME, and providing them an easier way to interact with resources was important for the project, both to support the community and to grow the ecosystem. We also wanted to better track metrics for GNOME—the number of contributors, the type and number of contributions, and the developmental progress of different parts of the project.

When it came time to pick a collaboration tool, we considered what we needed. One of the most important requirements was that it must be hosted by the GNOME community; being hosted by a third party didn’t feel like an option, so that discounted services like GitHub and Atlassian. And, of course, it had to be free software. It quickly became obvious that the only real contender was GitLab. We wanted to make sure contribution would be easy. GitLab has features like single sign-on, which allows people to use GitHub, Google, GitLab.com, and GNOME accounts.

We agreed that GitLab was the way to go, and we began to migrate from many tools to a single tool. GNOME board member Carlos Soriano led the charge. With lots of support from GitLab and the GNOME community, we completed the process in May 2018.

There was a lot of hope that moving to GitLab would help grow the community and make contributing easier. Because GNOME previously used so many different tools, including Bugzilla and CGit, it’s hard to quantitatively measure how the switch has impacted the number of contributions. We can more clearly track some statistics though, such as the nearly 10,000 issues closed and 7,085 merge requests merged between June and November 2018. People feel that the community has grown and become more welcoming and that contribution is, in fact, easier.

People come to free software from all sorts of different starting points, and it’s important to try to even out the playing field by providing better resources and extra support for people who need them. Git, as a tool, is widely used, and more people are coming to participate in free software with those skills ready to go. Self-hosting GitLab provides the perfect opportunity to combine the familiarity of Git with the feature-rich, user-friendly environment provided by GitLab.

It’s been a little over a year, and the change is really noticeable. Continuous integration (CI) has been a huge benefit for development, and it has been completely integrated into nearly every part of GNOME. Teams that aren’t doing code development have also switched to using the GitLab ecosystem for their work. Whether it’s using issue tracking to manage assigned tasks or version control to share and manage assets, even teams like Engagement and I&D have taken up using GitLab.

It can be hard for a community, even one developing free software, to adapt to a new technology or tool. It is especially hard in a case like GNOME, a project that recently turned 22. After more than two decades of building a project like GNOME, with so many parts used by so many people and organizations, the migration was an endeavor that was only possible thanks to the hard work of the GNOME community and generous assistance from GitLab.

I find a lot of convenience in working for a project that uses Git for version control. It’s a system that feels comfortable and is familiar—it’s a tool that is consistent across workplaces and hobby projects. As a new member of the GNOME community, it was great to be able to jump in and just use GitLab. As a community builder, it’s inspiring to see the results: more associated projects coming on board and entering the ecosystem; new contributors and community members making their first contributions to the project; and increased ability to measure the work we’re doing to know it’s effective and successful.

It’s great that so many teams doing completely different work, with completely different skills, agree to centralize on any tool—especially one that is considered a standard across open source. As a contributor to GNOME, I really appreciate that we’re using GitLab.


Published in GNU/Linux Rules!

DevSecOps evolves DevOps to ensure security remains an essential part of the process.


DevOps is well-understood in the IT world by now, but it's not flawless. Imagine you have implemented all of the DevOps engineering practices in modern application delivery for a project. You've reached the end of the development pipeline—but a penetration testing team (internal or external) has detected a security flaw and come up with a report. Now you have to re-initiate all of your processes and ask developers to fix the flaw.

This is not terribly tedious in a DevOps-based software development lifecycle (SDLC) system—but it does consume time and affects the delivery schedule. If security were integrated from the start of the SDLC, you might have tracked down the flaw and eliminated it along the way. But pushing security to the end of the development pipeline, as in the above scenario, leads to a longer development lifecycle.

This is the reason for introducing DevSecOps, which consolidates the overall software delivery cycle in an automated way.

In modern DevOps methodologies, where containers are widely used by organizations to host applications, we see greater use of Kubernetes and Istio. However, these tools have their own vulnerabilities. For example, the Cloud Native Computing Foundation (CNCF) recently completed a Kubernetes security audit that identified several issues. All tools used in the DevOps pipeline need to undergo security checks while running in the pipeline, and DevSecOps pushes admins to monitor the tools' repositories for upgrades and patches.

 

What is DevSecOps?

Like DevOps, DevSecOps is a mindset or a culture that developers and IT operations teams follow while developing and deploying software applications. It integrates active and automated security audits and penetration testing into agile application development.

 

To utilize DevSecOps, you need to:

Introduce the concept of security right from the start of the SDLC to minimize vulnerabilities in software code (a small pre-commit sketch follows this list).

Ensure everyone, including developers and IT operations teams, shares responsibility for following security practices in their tasks.

Integrate security controls, tools, and processes at the start of the DevOps workflow to enable automated security checks at each stage of software delivery.

DevOps has always been about including security—as well as quality assurance (QA), database administration, and everyone else—in the dev and release process. However, DevSecOps is an evolution of that process that ensures security is never forgotten and remains an essential part of it.
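As a concrete sketch of the first point above, a Git pre-commit hook can block commits that contain obvious secrets before they ever reach the repository. The hook path is standard Git; the choice of gitleaks as the scanner is an assumption, just one common open source option:

$ cat > .git/hooks/pre-commit << 'EOF'
#!/bin/sh
# Refuse the commit if the secret scanner flags anything in the staged changes
exec gitleaks protect --staged
EOF
$ chmod +x .git/hooks/pre-commit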

 

Understanding the DevSecOps pipeline

There are different stages in a DevOps pipeline; a typical SDLC process includes phases like Plan, Code, Build, Test, Release, and Deploy. In DevSecOps, specific security checks are applied in each phase, as the command-line sketch after this list illustrates.

 

Plan: Execute security analysis and create a test plan to determine scenarios for where, how, and when testing will be done.

Code: Deploy linting tools and Git controls to secure passwords and API keys.

Build: While building code for execution, incorporate static application security testing (SAST) tools to track down flaws in code before deploying to production. These tools are specific to programming languages.

Test: Use dynamic application security testing (DAST) tools to test your application while in runtime. These tools can detect errors associated with user authentication, authorization, SQL injection, and API-related endpoints.

Release: Just before releasing the application, employ security analysis tools to perform thorough penetration testing and vulnerability scanning.

Deploy: After completing the above tests in runtime, send a secure build to production for final deployment.
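To make the phases more concrete, here is a hedged command-line sketch of the kinds of checks a pipeline might run. gitleaks, Bandit, and OWASP ZAP are illustrative open source choices for secret scanning, SAST (for Python code), and DAST respectively, not tools mandated by DevSecOps; the staging URL is only an example:

$ gitleaks detect --source .   # Code: scan the repository for committed secrets
$ bandit -r src/               # Build: static analysis (SAST) of Python sources
$ docker run -t owasp/zap2docker-stable zap-baseline.py -t https://staging.example.com   # Test: baseline DAST scan of a running instance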

 

DevSecOps tools

Tools are available for every phase of the SDLC. Some are commercial products, but most are open source. In my next article, I will talk more about the tools to use in different stages of the pipeline.

DevSecOps will play a more crucial role as we continue to see an increase in the complexity of enterprise security threats built on modern IT infrastructure. However, the DevSecOps pipeline will need to improve incrementally over time, rather than relying on implementing all security changes at once; that incremental approach reduces the risk of backtracking or of failed application delivery.

 


Published in GNU/Linux Rules!


Access your Android device from your PC with this open source application based on scrcpy.

 

 

In the future, all the information you need will be just one gesture away, and it will all appear in midair as a hologram that you can interact with even while you're driving your flying car. That's the future, though, and until that arrives, we're all stuck with information spread across a laptop, a phone, a tablet, and a smart refrigerator. Unfortunately, that means when we need information from a device, we generally have to look at that device.

While not quite holographic terminals or flying cars, guiscrcpy by developer Srevin Saju is an application that consolidates multiple screens in one location and helps to capture that futuristic feeling.

Guiscrcpy is an open source (GNU GPLv3 licensed) project based on the award-winning scrcpy open source engine. With guiscrcpy, you can cast your Android screen onto your computer screen so you can view it along with everything else. Guiscrcpy supports Linux, Windows, and MacOS.

Unlike many scrcpy alternatives, Guiscrcpy is not a fork of scrcpy. The project prioritizes collaborating with other open source projects, so Guiscrcpy is an extension, or a graphical user interface (GUI) layer, for scrcpy. Keeping the Python 3 GUI separate from scrcpy ensures that nothing interferes with the efficiency of the scrcpy backend. You can screencast up to 1080p resolution and, because it uses ultrafast rendering and surprisingly little CPU, it works even on a relatively low-end PC.

Scrcpy, Guiscrcpy's foundation, is a command-line application, so it doesn't have GUI buttons to handle gestures, it doesn't provide a Back or Home button, and it requires familiarity with the Linux terminal. Guiscrcpy adds GUI panels to scrcpy, so any user can run it—and cast and control their device—without sending any information over the internet. Everything works over USB or WiFi (using only a local network). Guiscrcpy also adds a desktop launcher to Linux and Windows systems and provides compiled binaries for Linux and Windows.

 

Installing Guiscrcpy

Before installing Guiscrcpy, you must install its dependencies, most notably scrcpy. Possibly the easiest way to install scrcpy is with snap, which is available for most major Linux distributions. If you have snap installed and active, then you can install scrcpy with one easy command:

 

$ sudo snap install scrcpy

 

While it's installing, you can install the other dependencies. The Simple DirectMedia Layer (SDL 2.0) toolkit is required to display and interact with the phone screen, and the Android Debug Bridge (adb) command connects your computer to your Android phone.

On Fedora or CentOS:

 

 

$ sudo dnf install SDL2 android-tools

 

On Ubuntu or Debian:

 

$ sudo apt install libsdl2-2.0-0 android-tools-adb
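The requirements.txt file used in the next step ships with guiscrcpy's source code, so you need a local copy of the repository first; a quick sketch, assuming the project's GitHub location:

$ git clone https://github.com/srevinsaju/guiscrcpy.git
$ cd guiscrcpy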

 

Then, from inside the cloned directory, install the Python dependencies:

 

$ python3 -m pip install -r requirements.txt --user

 

Setting up your phone

 

For your phone to accept an adb connection, it must have Developer Mode enabled. To enable Developer Mode on Android, go to Settings and select About phone. In About phone, find the Build number (it may be in the Software information panel). Believe it or not, to enable Developer Mode, tap Build number seven times in a row.

 

[Image: developer-mode.jpg, enabling Developer Mode in Android settings]

 

For full instructions on all the many ways you can configure your phone for access from your computer, read the Android developer documentation.

Once that's set up, plug your phone into a USB port on your computer (or ensure that you've configured it correctly to connect over WiFi).
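Before launching guiscrcpy, you can confirm that adb sees your device. The WiFi steps are optional and assume your phone and computer share a local network; the IP address below is only an example:

$ adb devices                    # list the devices adb can reach
$ adb tcpip 5555                 # optional: switch a USB-attached phone to TCP mode
$ adb connect 192.168.1.20:5555  # optional: reconnect over WiFi using the phone's IP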

 

Using guiscrcpy

When you launch guiscrcpy, you see its main control window. In this window, click the Start scrcpy button. This connects to your phone, as long as it's set up in Developer Mode and connected to your computer over USB or WiFi.

[Image: guiscrcpy-main.png, the guiscrcpy main control window]

 

It also includes a configuration-writing system, where you can write a configuration file to your ~/.config directory to preserve your preferences between uses.

The bottom panel of guiscrcpy is a floating window that helps you perform basic control actions. It has buttons for Home, Back, Power, and more. These are common functions on Android devices, but an important feature of this module is that it doesn't interact with scrcpy's SDL window, so it can function with no lag. In other words, this panel communicates directly with your connected device through adb rather than through scrcpy.

[Image: guiscrcpy-bottompanel.png, the floating control panel]
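Under the hood, a button such as Home corresponds to an adb key event. You can reproduce the equivalent action from a terminal yourself; this is an illustration of the mechanism, not guiscrcpy's actual code:

$ adb shell input keyevent KEYCODE_HOME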

 

The project is in active development and new features are still being added. The latest build has an interface for gestures and notifications.

With guiscrcpy, you not only see your phone on your screen, but you can also interact with it, either by clicking the SDL window itself, just as you would tap your physical phone, or by using the buttons on the panels.

[Image: guiscrcpy-screenshot.jpg, an Android screen mirrored on the desktop]

 

Guiscrcpy is a fun and useful application that provides features that ought to be official features of any modern device, especially a platform like Android. Try it out yourself, and add some futuristic pragmatism to your present-day digital life.

 


Published in GNU/Linux Rules!
Wednesday, 08 May 2019 23:04

Using rsync to back up your Linux system

Find out how to use rsync in a backup scenario.


Basic rsync commands are usually enough to manage your Linux backups, but a few extra options add speed and power to large backup sets.

 

It seems clear that backups are always a hot topic in the Linux world. Back in 2017, David Both offered Opensource.com readers tips on "Using rsync to back up your Linux system," and earlier this year, he published a poll asking us, "What's your primary backup strategy for the /home directory in Linux?" In another poll this year, Don Watkins asked, "Which open source backup solution do you use?"

My response is rsync. I really like rsync! There are plenty of large and complex tools on the market that may be necessary for managing tape drives or storage library devices, but a simple open source command line tool may be all you need.

Basic rsync

I managed the binary repository system for a global organization that had roughly 35,000 developers and multiple terabytes of files. I regularly moved or archived hundreds of gigabytes of data at a time, and rsync was the tool I used. This experience gave me confidence in this simple tool. (So, yes, I use it at home to back up my Linux systems.)

 

The basic rsync command is simple.

rsync -av SRC DST

Indeed, the rsync commands taught in any tutorial will work fine for most general situations. However, suppose we need to back up a very large amount of data. Something like a directory with 2,000 sub-directories, each holding anywhere from 50GB to 700GB of data. Running rsync on this directory could take a tremendous amount of time, particularly if you're using the checksum option, which I prefer.

Performance is likely to suffer if we try to sync large amounts of data or sync across slow network connections. Let me show you some methods I use to ensure good performance and reliability.

 

Advanced rsync

One of the first lines that appears when rsync runs is: "sending incremental file list." If you do a search for this line, you'll see many questions asking things like: why is it taking forever? or why does it seem to hang up?

Here's an example based on this scenario. Let's say we have a directory called /storage that we want to back up to an external USB device mounted at /media/WDPassport.

If we want to back up /storage to a USB external drive, we could use this command:

rsync -cav /storage /media/WDPassport

 

The c option tells rsync to use file checksums instead of timestamps to determine changed files, and this usually takes longer. In order to break down the /storage directory, I sync by subdirectory, using the find command. Here's an example:

find /storage -type d -exec rsync -cav {} /media/WDPassport \;

 

This looks OK, but if there are any files in the /storage directory, they will not be copied. So, how can we sync the files in /storage? There is also a small nuance where certain options will cause rsync to sync the . directory, which is the root of the source directory; this means it will sync the subdirectories twice, and we don't want that.

Long story short, the solution I settled on is a "double-incremental" script. This allows me to break down a directory, for example, breaking /home into the individual users' home directories or in cases when you have multiple large directories, such as music or family photos.

Here is an example of my script:

HOMES="alan"
DRIVE="/media/WDPassport"

for HOME in $HOMES; do
     cd /home/$HOME
     rsync -cdlptgov --delete . /$DRIVE/$HOME
     find . -maxdepth 1 -type d -not -name "." -exec rsync -crlptgov --delete {} /$DRIVE/$HOME \;
done

 

The first rsync command copies the files and directories that it finds in the source directory. However, it leaves the directories empty so we can iterate through them using the find command. This is done by passing the d argument, which tells rsync not to recurse the directory.

-d, --dirs                  transfer directories without recursing

 

The find command then passes each directory to rsync individually. Rsync then copies the directories' contents. This is done by passing the r argument, which tells rsync to recurse the directory.

-r, --recursive             recurse into directories

 

This keeps the incremental file list that rsync builds for each transfer down to a manageable size.

Most rsync tutorials use the a (or archive) argument for convenience. This is actually a compound argument.

-a, --archive               archive mode; equals -rlptgoD (no -H,-A,-X)

 

The other arguments that I pass would have been included in a; those are l, p, t, g, and o.

-l, --links                 copy symlinks as symlinks
-p, --perms                 preserve permissions
-t, --times                 preserve modification times
-g, --group                 preserve group
-o, --owner                 preserve owner (super-user only)

 

The --delete option tells rsync to remove any files on the destination that no longer exist on the source. This way, the result is an exact duplication. You can also exclude the .Trash directories or the .DS_Store files created by MacOS by adding tests like these to the find command:

-not -name ".Trash*" -not -name ".DS_Store"
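Putting that together, the second pass of the script above becomes (same variables as before):

find . -maxdepth 1 -type d -not -name "." -not -name ".Trash*" -not -name ".DS_Store" -exec rsync -crlptgov --delete {} "$DRIVE/$HOME" \;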

 

Be careful

One final recommendation: rsync can be a destructive command. Luckily, its thoughtful creators provided the ability to do "dry runs." If we include the n option, rsync will display the expected output without writing any data.

rsync -cdlptgovn --delete . "$DRIVE/$HOME"

This script is scalable to very large storage sizes and large latency or slow link situations. I'm sure there is still room for improvement, as there always is. If you have suggestions, please share them in the comments.

Source: opensource.com

 

 

 

Marielle Price 
Published in GNU/Linux Rules!