Monday, 30 September 2019 10:02

GNU Debugger: Practical tips


Learn how to use some of the lesser-known features of gdb to inspect and fix your code.

  

 

The GNU Debugger (gdb) is an invaluable tool for inspecting running processes and fixing problems while you're developing programs.

You can set breakpoints at specific locations (by function name, line number, and so on), enable and disable those breakpoints, display and alter variable values, and do all the standard things you would expect any debugger to do. But it has many other features you might not have experimented with. Here are five for you to try.

Conditional breakpoints

Setting a breakpoint is one of the first things you'll learn to do with the GNU Debugger. The program stops when it reaches a breakpoint, and you can run gdb commands to inspect it or change variables before allowing the program to continue.

For example, you might know that an often-called function crashes sometimes, but only when it gets a certain parameter value. You could set a breakpoint at the start of that function and run the program. The function parameters are shown each time it hits the breakpoint, and if the parameter value that triggers the crash is not supplied, you can continue until the function is called again. When the troublesome parameter triggers a crash, you can step through the code to see what's wrong.

 

(gdb) break sometimes_crashes

Breakpoint 1 at 0x40110e: file prog.c, line 5.

(gdb) run

[...]

Breakpoint 1, sometimes_crashes (f=0x7fffffffd1bc) at prog.c:5

5 fprintf(stderr,

(gdb) continue

Breakpoint 1, sometimes_crashes (f=0x7fffffffd1bc) at prog.c:5

5 fprintf(stderr,

(gdb) continue

 

To make this more repeatable, you could count how many times the function is called before the specific call you are interested in, and set a counter on that breakpoint (for example, "continue 30" to make it ignore the next 29 times it reaches the breakpoint).
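Gdb also has a dedicated ignore command that sets an ignore count on an existing breakpoint directly (a small sketch, assuming breakpoint 1 from the session above):

(gdb) ignore 1 29

Will ignore next 29 crossings of breakpoint 1.

(gdb) continue

The effect is the same as "continue 30": gdb passes silently over the next 29 hits of the breakpoint and stops on the 30th.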

 

But where breakpoints get really powerful is in their ability to evaluate expressions at runtime, which allows you to automate this kind of testing. Enter: conditional breakpoints.

break [LOCATION] if CONDITION

(gdb) break sometimes_crashes if !f

Breakpoint 1 at 0x401132: file prog.c, line 5.

(gdb) run

[...]

Breakpoint 1, sometimes_crashes (f=0x0) at prog.c:5

5 fprintf(stderr,

(gdb)

 

 

Instead of having gdb ask what to do every time the function is called, a conditional breakpoint allows you to make gdb stop at that location only when a particular expression evaluates as true. If the execution reaches the conditional breakpoint location, but the expression evaluates as false, the debugger automatically lets the program continue without asking the user what to do.
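If the breakpoint already exists, you can also attach or change its condition with the condition command rather than re-creating it (a brief sketch, reusing breakpoint 1 from above):

(gdb) condition 1 !f

Running condition with just a breakpoint number and no expression removes the condition, turning it back into an ordinary breakpoint.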

 

Breakpoint commands

 

An even more sophisticated feature of breakpoints in the GNU Debugger is the ability to script a response to reaching a breakpoint. Breakpoint commands allow you to write a list of GNU Debugger commands to run whenever it reaches a breakpoint.

We can use this to work around the bug we already know about in the sometimes_crashes function, making it return harmlessly whenever it is passed a null pointer.

We can use silent as the first line to get more control over the output. Without this, the stack frame will be displayed each time the breakpoint is hit, even before our breakpoint commands run.

 

(gdb) break sometimes_crashes

Breakpoint 1 at 0x401132: file prog.c, line 5.

(gdb) commands 1

Type commands for breakpoint(s) 1, one per line.

End with a line saying just "end".

>silent

>if !f

>frame

>printf "Skipping call\n"

>return 0

>continue

>end

>printf "Continuing\n"

>continue

>end

(gdb) run

Starting program: /home/twaugh/Documents/GDB/prog

warning: Loadable section ".note.gnu.property" outside of ELF segments

Continuing

Continuing

Continuing

#0 sometimes_crashes (f=0x0) at prog.c:5

5 fprintf(stderr,

Skipping call

[Inferior 1 (process 9373) exited normally]

(gdb)

 

Dump binary memory

 

GNU Debugger has built-in support for examining memory using the x command in various formats, including octal, hexadecimal, and so on. But I like to see two formats side by side: hexadecimal bytes on the left, and ASCII characters represented by those same bytes on the right.

 

When I want to view the contents of a file byte-by-byte, I often use hexdump -C (hexdump comes from the util-linux package). Here is gdb's x command displaying hexadecimal bytes:

(gdb) x/33xb mydata
0x404040 <mydata>:    0x02    0x01    0x00    0x02    0x00    0x00    0x00    0x01
0x404048 <mydata+8>:    0x01    0x47    0x00    0x12    0x61    0x74    0x74    0x72
0x404050 <mydata+16>:    0x69    0x62    0x75    0x74    0x65    0x73    0x2d    0x63
0x404058 <mydata+24>:    0x68    0x61    0x72    0x73    0x65    0x75    0x00    0x05
0x404060 <mydata+32>:    0x00

 

 

What if you could teach gdb to display memory just like hexdump does? You can, and in fact, you can use this method for any format you prefer.

 

By combining the dump command (to store a range of memory in a file), the shell command (to run hexdump on that file), and the define command, we can create our own hexdump command that displays the contents of memory in hexdump's format.

 

 

(gdb) define hexdump

Type commands for definition of "hexdump".

End with a line saying just "end".

>dump binary memory /tmp/dump.bin $arg0 $arg0+$arg1

>shell hexdump -C /tmp/dump.bin

>end

 

 

 

Those commands can even go in the ~/.gdbinit file to define the hexdump command permanently. Here it is in action:

  

(gdb) hexdump mydata sizeof(mydata)
00000000  02 01 00 02 00 00 00 01  01 47 00 12 61 74 74 72  |.........G..attr|
00000010  69 62 75 74 65 73 2d 63  68 61 72 73 65 75 00 05  |ibutes-charseu..|
00000020  00                                                |.|
00000021
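One possible refinement (a sketch, not part of the original session): user-defined commands can check how many arguments they received via $argc, so the length argument can be made optional. Here, as an assumption, the length defaults to 64 bytes when only the address is given. In ~/.gdbinit form:

define hexdump
  if $argc == 1
    # Only an address was given; assume a default length of 64 bytes
    dump binary memory /tmp/dump.bin $arg0 $arg0+64
  else
    dump binary memory /tmp/dump.bin $arg0 $arg0+$arg1
  end
  shell hexdump -C /tmp/dump.bin
end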





 

Inline disassembly

 

Sometimes you want to understand more about what happened leading up to a crash, and the source code is not enough. You want to see what's going on at the CPU instruction level.

The disassemble command lets you see the CPU instructions that implement a function. But sometimes the output can be hard to follow. Usually, I want to see what instructions correspond to a certain section of source code in the function. To achieve this, use the /s modifier to include source code lines with the disassembly.

 

(gdb) disassemble/s main
Dump of assembler code for function main:
prog.c:
11    {
   0x0000000000401158 <+0>:    push   %rbp
   0x0000000000401159 <+1>:    mov      %rsp,%rbp
   0x000000000040115c <+4>:    sub      $0x10,%rsp

12      int n = 0;
   0x0000000000401160 <+8>:    movl   $0x0,-0x4(%rbp)

13      sometimes_crashes(&n);
   0x0000000000401167 <+15>:    lea     -0x4(%rbp),%rax
   0x000000000040116b <+19>:    mov     %rax,%rdi
   0x000000000040116e <+22>:    callq  0x401126 <sometimes_crashes>
[...snipped...]

 

 

This, along with info registers to see the current values of all the CPU registers and commands like stepi to step one instruction at a time, allow you to have a much more detailed understanding of the program.
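For example, after stopping at a breakpoint, you might single-step one machine instruction and inspect the state (a sketch for x86-64; register names and exact output depend on your architecture and binary):

(gdb) stepi
(gdb) info registers rip rax rdi
(gdb) x/3i $pc

Here x/3i $pc disassembles the next three instructions starting at the current program counter.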

Reverse debug

 

Sometimes you wish you could turn back time. Imagine you've hit a watchpoint on a variable. A watchpoint is like a breakpoint, but instead of being set at a location in the program, it is set on an expression (using the watch command). Whenever the value of the expression changes, execution stops, and the debugger takes control.

So imagine you've hit this watchpoint, and the memory used by a variable has changed value. This can turn out to be caused by something that occurred much earlier; for example, the memory was freed and is now being re-used. But when and why was it freed?

The GNU Debugger can solve even this problem because you can run your program in reverse!

It achieves this by carefully recording the state of the program at each step so that it can restore previously recorded states, giving the illusion of time flowing backward.

To enable this state recording, use the target record-full command. Then you can use impossible-sounding commands, such as:

 

 

reverse-step, which rewinds to the previous source line

reverse-next, which rewinds to the previous source line, stepping backward over function calls

reverse-finish, which rewinds to the point when the current function was about to be called

reverse-continue, which rewinds to the previous state in the program that would (now) trigger a breakpoint (or anything else that causes it to stop)

 

Here is an example of reverse debugging in action:

 (gdb) b main
Breakpoint 1 at 0x401160: file prog.c, line 12.
(gdb) r
Starting program: /home/twaugh/Documents/GDB/prog
[...]

Breakpoint 1, main () at prog.c:12
12      int n = 0;
(gdb) target record-full
(gdb) c
Continuing.

Program received signal SIGSEGV, Segmentation fault.
0x0000000000401154 in sometimes_crashes (f=0x0) at prog.c:7
7      return *f;
(gdb) reverse-finish
Run back to call of #0  0x0000000000401154 in sometimes_crashes (f=0x0)
        at prog.c:7
0x0000000000401190 in main () at prog.c:16
16      sometimes_crashes(0);

 

 

These are just a handful of useful things the GNU Debugger can do. There are many more to discover. Which hidden, little-known, or just plain amazing feature of gdb is your favorite? Please share it in the comments.


Published in GNU/Linux Rules!


Access your Android device from your PC with this open source application based on scrcpy.

 

 

In the future, all the information you need will be just one gesture away, and it will all appear in midair as a hologram that you can interact with even while you're driving your flying car. That's the future, though, and until that arrives, we're all stuck with information spread across a laptop, a phone, a tablet, and a smart refrigerator. Unfortunately, that means when we need information from a device, we generally have to look at that device.

While not quite holographic terminals or flying cars, guiscrcpy by developer Srevin Saju is an application that consolidates multiple screens in one location and helps to capture that futuristic feeling.

Guiscrcpy is an open source (GNU GPLv3 licensed) project based on the award-winning scrcpy open source engine. With guiscrcpy, you can cast your Android screen onto your computer screen so you can view it along with everything else. Guiscrcpy supports Linux, Windows, and macOS.

Unlike many scrcpy alternatives, Guiscrcpy is not a fork of scrcpy. The project prioritizes collaborating with other open source projects, so Guiscrcpy is an extension, or a graphical user interface (GUI) layer, for scrcpy. Keeping the Python 3 GUI separate from scrcpy ensures that nothing interferes with the efficiency of the scrcpy backend. You can screencast up to 1080p resolution and, because it uses ultrafast rendering and surprisingly little CPU, it works even on a relatively low-end PC.

Scrcpy, Guiscrcpy's foundation, is a command-line application, so it doesn't have GUI buttons to handle gestures, it doesn't provide a Back or Home button, and it requires familiarity with the Linux terminal. Guiscrcpy adds GUI panels to scrcpy, so any user can run it—and cast and control their device—without sending any information over the internet. Everything works over USB or WiFi (using only a local network). Guiscrcpy also adds a desktop launcher to Linux and Windows systems and provides compiled binaries for Linux and Windows.

 

Installing Guiscrcpy

Before installing Guiscrcpy, you must install its dependencies, most notably scrcpy. Possibly the easiest way to install scrcpy is with snap, which is available for most major Linux distributions. If you have snap installed and active, then you can install scrcpy with one easy command:

 

$ sudo snap install scrcpy

 

While it's installing, you can install the other dependencies. The Simple DirectMedia Layer (SDL 2.0) toolkit is required to display and interact with the phone screen, and the Android Debug Bridge (adb) command connects your computer to your Android phone.

On Fedora or CentOS:

 

 

$ sudo dnf install SDL2 android-tools

 

On Ubuntu or Debian:

 

$ sudo apt install libsdl2-2.0-0 android-tools-adb

 

In another terminal, from a clone of the guiscrcpy source tree (which provides the requirements.txt file), install the Python dependencies:

 

$ python3 -m pip install -r requirements.txt --user

 

Setting up your phone

 

For your phone to accept an adb connection, it must have Developer Mode enabled. To enable Developer Mode on Android, go to Settings and select About phone. In About phone, find the Build number (it may be in the Software information panel). Believe it or not, to enable Developer Mode, tap Build number seven times in a row.

 

[Image: enabling Developer Mode in Android's settings]

 

For full instructions on all the many ways you can configure your phone for access from your computer, read the Android developer documentation.

Once Developer Options is unlocked, enable USB debugging inside it. Then plug your phone into a USB port on your computer (or ensure that you've configured it correctly to connect over WiFi).
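If you would rather use WiFi, one common setup is adb's TCP/IP mode (a sketch; it assumes you connect the phone over USB once to switch modes, that both devices share the same local network, and that 192.168.1.20 stands in for your phone's actual IP address):

$ adb tcpip 5555
$ adb connect 192.168.1.20:5555
$ adb devices

Once adb devices lists the phone, you can unplug the USB cable, and guiscrcpy can reach the device over the network.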

 

Using guiscrcpy

When you launch guiscrcpy, you see its main control window. In this window, click the Start scrcpy button. This connects to your phone, as long as it's set up in Developer Mode and connected to your computer over USB or WiFi.

[Screenshot: the guiscrcpy main control window]

 

It also includes a configuration-writing system, where you can write a configuration file to your ~/.config directory to preserve your preferences between uses.

The bottom panel of guiscrcpy is a floating window that helps you perform basic control actions. It has buttons for Home, Back, Power, and more. These are common functions on Android devices, but an important feature of this module is that it doesn't interact with scrcpy's SDL, so it can function with no lag. In other words, this panel communicates directly with your connected device through adb rather than through scrcpy.

[Screenshot: the guiscrcpy bottom control panel]
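To give a sense of what that direct adb control looks like, sending Android key events from a terminal works like this (a sketch of the kind of call such a panel can make, not necessarily guiscrcpy's exact implementation):

$ adb shell input keyevent KEYCODE_HOME
$ adb shell input keyevent KEYCODE_BACK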

 

The project is in active development and new features are still being added. The latest build has an interface for gestures and notifications.

With guiscrcpy, you not only see your phone on your screen, but you can also interact with it, either by clicking the SDL window itself, just as you would tap your physical phone, or by using the buttons on the panels.

[Screenshot: an Android device mirrored on the desktop with guiscrcpy]

 

Guiscrcpy is a fun and useful application that provides features that ought to be official features of any modern device, especially a platform like Android. Try it out yourself, and add some futuristic pragmatism to your present-day digital life.

 


Published in GNU/Linux Rules!

Deepin OS is one of the most modern-looking Linux distros. If you are a fan of sleek design and want an easy-to-use distro at the same time, get your hands on Deepin. It is also extremely easy to install. I am sure you'll love it.

The team has developed its own desktop environment based on Qt and uses a window manager derived from KDE Plasma's KWin, known as dde-kwin. The Deepin team has also developed around 30 native applications to make day-to-day tasks easier to complete.

Some of the native Deepin applications are the Deepin Installer, Deepin File Manager, Deepin System Monitor, Deepin Store, Deepin Screen Recorder, and Deepin Cloud Print. If you ever run out of options, remember that thousands of open source applications are also available in the store.

The development of Deepin started in 2004 under the name 'Hiwix' and has been active ever since. The distro's name has changed multiple times, but the goal has remained the same: provide a stable operating system that is easy to install and use.

The current version, Deepin OS 15.11, is based on the Debian stable branch. It was released on 19 July 2019 with some great new features and many improvements and bug fixes.

 

 

 

 

Cloud sync

The most notable feature in this release is cloud sync. It is useful if you have multiple machines running Deepin, or if you have to reset your Deepin installation more often than most. The distro keeps your system settings in sync with cloud storage as soon as you sign in. If the installation is reset, the settings can be quickly imported from the cloud. The feature syncs all system settings such as themes, sound settings, update settings, wallpaper, dock, and power settings. Unfortunately, cloud sync is currently only available to users with a Deepin ID in mainland China.

They are testing the feature and will release it to the rest of the Deepin user base soon. Other user-friendliness-focused Linux distributions would do well to develop a similar feature. Cloud sync is especially useful for new Linux users: they don't have to set everything up from scratch if they mess up their current installation.

 

 

dde-kwin

Deepin switched from dde-wm to dde-kwin in 15.10. dde-kwin consumes less memory and provides a faster, better user experience, and Deepin 15.11 brings more stability to it.

Deepin Store

Among the 30 native applications the Deepin team has developed is the Deepin Store, which lets you easily browse and install applications from the distro's repositories. The new release ships with Deepin Store 5.3. The updated store app can now determine the user's region based on the Deepin ID's location. Another option has been added to the Deepin File Manager for burning files to CD/DVD. Though CDs and DVDs are largely a thing of the past, if somebody still needs to burn data to one, it's extremely easy to do in Deepin.

To play media, the distro ships with the Deepin Movie application, which now supports loading subtitles via drag and drop: just drag the subtitle file and drop it on the player while the movie is playing. Besides these new features, there are more improvements and bug fixes in Deepin 15.11. For people looking for a beautiful, feature-rich, and stable Linux distribution, Deepin could be the platform of choice.



Published in GNU/Linux Rules!
Wednesday, 08 May 2019 19:04

Using rsync to back up your Linux system

Find out how to use rsync in a backup scenario.

Published in GNU/Linux Rules!


 

Linux has come a long way since 1991. These events mark its evolution.

1. Linus releases Linux

Linus Torvalds initially released Linux to the world in 1991 as a hobby. It didn't remain a hobby for long!

 

 

2. Linux distributions

In 1993, several Linux distributions were founded, notably Debian, Red Hat, and Slackware. These were important because they demonstrated Linux's gains in market acceptance and development, which enabled it to survive the tumultuous OS wars, browser wars, and protocol wars of the 1990s. In contrast, many established, commercial, and proprietary products did not make it past the turn of the millennium!

 

 

3. IBM's big investment in Linux

In 2000, IBM announced it would invest US$1 billion in Linux. In his CNN Money article about the investment, Richard Richtmyer wrote: "The announcement underscores Big Blue's commitment to Linux and marks significant progress in moving the alternative operating system into the mainstream commercial market."

 

 

4. Hollywood adopts Linux

In 2002, it seemed the entire Hollywood movie industry adopted Linux. Disney, DreamWorks, and Industrial Light & Magic all began making movies with Linux that year.

 

 

5. Linux for national security

In 2003, another big moment came with the US government's acceptance of Linux. Red Hat Linux was awarded the Department of Defense Common Operating Environment (COE) certification. This is significant because the government—intelligence and military agencies in particular—have very strict requirements for computing systems to prevent attacks and support national security. This opened the door for other agencies to use Linux. Later that year, the National Weather Service announced it would replace outdated systems with new computers running Linux.

 

 

6. The systems I managed

This "moment" is really a collection of my personal experiences. As my career progressed in the 2000s, I discovered several types of systems and devices that I managed were all running Linux. Some of the places I found Linux were VMware ESX, F5 Big-IP, Check Point UTM Edge, Cisco ASA, and PIX. This made me realize that Linux was truly viable and here to stay.

 

 

7. Ubuntu

In 2004, Canonical was founded by Mark Shuttleworth to provide an easy-to-use Linux desktop—Ubuntu Linux—based on the Debian distribution. I think Ubuntu Linux helped to expand the desktop Linux install base. It put Linux in front of many more people, from casual home users to professional software developers.

 

 

8. Google Linux

Google released two operating systems based on the Linux kernel: the Android mobile operating system in 2008 and Chrome OS, running on the Chromebook, in 2011. Since then, millions of Android mobile phones and Chromebooks have been sold.

 

 

9. The cloud is Linux

In the past 10 years or so, cloud computing has gone from a grandiose vision of computing on the internet to a reinvention of how we use computers personally and professionally. The big players in the cloud space are built on Linux, including Amazon Web Services, Google Cloud Services, and Linode. Even in cases where we aren't certain, such as Microsoft Azure, running Linux workloads is well supported.

 

 

10. My car runs Linux

And so will yours! Many automakers began introducing Linux a few years ago. This led to the formation of the collaborative open source project called Automotive Grade Linux. Major car makers, such as Toyota and Subaru, have joined together to develop Linux-based automotive entertainment, navigation, and engine-management systems.

 

 

Share your favorite

What was your favorite Linux moment? Share it in the comments.

Source: Opensource.com

Author: Alan Formy-Duval

Marielle Price

 

Published in GNU/Linux Rules!


If you’ve come here looking to fix an errant recursive chmod or chown command on an RPM-based Linux system, then here is the quick solution. Run the following commands using root privileges:

rpm --setugids -a
rpm --setperms -a

The --setugids option to the rpm command sets the user and group ownership of files in a given package. The -a option tells rpm to do this for all installed packages. The --setperms option sets the permissions of files in the given package.

If this fixes your issue, great!  If not, or you want to be thorough, continue reading.

Why Would You Need To Fix the Permissions and User/Group Ownership of Files

The most common reason you’ll need to follow the procedure below is to recover from a chmod or chown command that didn’t do what you initially intended it to do.  Using this procedure can save you from having to perform a complete system restore or a complete system reinstall.

Perhaps you, or someone else, accidentally executed a recursive chmod or chown command on part of the file system, or even all of it. Even if the mistake is noticed and the command is stopped by typing Ctrl-C as quickly as possible, many files could have been changed in that short period of time, and you won't be able to immediately tell which files were changed.

Problems Caused by Incorrect Permissions and Ownerships of Files

Having improper file permissions or ownerships can cause processes and services to behave in unexpected ways, stop working immediately, or prevent them from starting once they’ve been stopped.

For example, if the user running the web server process can’t read the files it’s supposed to serve, then the service it provides is effectively broken.

If a service is already running, it probably doesn’t need to read its configuration file(s) again as it has that information in memory.  However, if it can’t read its configuration when it attempts to start, it simply isn’t going to start.

Also, when some services start, they create a lock file to indicate that the service is running.  When the service stops, it deletes the lock file. However, if the permissions on that lock file are changed while the service is running such that the service can’t delete the file, then of course the lock file won’t get deleted.  This will prevent the service from starting again as it thinks it’s already running due to the presence of the lock file.

Perhaps the file that actually needs to be executed no longer has execute permissions.  Needless to say, that will definitely keep a service from starting.

If you have a service such as a database that writes data, it needs the proper permissions to write data to file, create new files, and so on.

Those are some common issues you can run into when file permissions and ownerships are not set properly.

Examples of Errant chmod and chown Commands

A common way a chmod or chown command can go wrong is by using recursion while making a typing mistake or providing an incorrect path.  For example, let’s say you’ve created some configuration files in the /var/lib/pgsql directory as the root user. You want to make sure all those files are owned by the postgres user, so you intend to run this command:

chown -R postgres /var/lib/pgsql

However, you accidentally add a space between the leading forward slash and var, making the actual command executed this one:

chown -R postgres / var/lib/pgsql

Oh what a difference a space can make!  Now, every file on the system is owned by the postgres user.

The reason is that chown interpreted the first forward slash ("/") as one absolute path to operate on, and "var/lib/pgsql" as a second, relative path to operate on. The chown command, like any Linux command, only does what you tell it to do. It can't read your mind; it doesn't know you intended to supply only the single path /var/lib/pgsql.

Fixing File Ownerships and Permissions with the RPM Command

Continuing with our example, you should be able to execute the following command with root privileges and return to a fairly stable state:

rpm --setugids -a

This command will restore the owner and group membership for every file that was installed via an RPM package. Changing the ownership of a file can cause the set-user-ID (SUID) or set-group-ID (SGID) permission bits to be cleared. Because of this, we need to restore the permissions on the files as well:

rpm --setperms -a

Now every file that is known by rpm will have the same permissions as when it was initially installed.
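If you want to spot-check the result, rpm can also verify installed files against its database (a quick sketch; see the rpm man page for the full meaning of each flag column):

rpm -Va

In the verification output, a line flagged with M indicates a mode (permissions) mismatch, U a user-ownership mismatch, and G a group-ownership mismatch. After running the two commands above, packaged files should no longer show those flags.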

By the way, use this same process to fix an errant chmod command, too. Be sure to run the commands in the same order because of the SUID and SGID issue described above; that is, run rpm with the --setperms option last.

Fixing File Ownerships and Permissions for Files Not Known by RPM

Not all the files on the system are going to be part of an RPM package.  Most data, either transient or permanent, will live outside of an RPM package.  Examples include temporary files, files used to store database data, lock files, web site files, some configuration files, and more depending on the system in question.

At least check the most important services that the system provides.  For example, if you are working on a database server, make sure the database service starts correctly.  If it’s a web server, make sure the web server service is functioning.

Here is the pattern:

systemctl restart SERVICE_NAME

If the service does not start, determine the reason by looking at the logs and messages:

journalctl -xe

Fix any issues and try again until the service starts.

Example:

systemctl restart postfix
# The service fails to start.
journalctl -xe
# The error message is “fatal: open lock file /var/lib/postfix/master.lock: cannot open file: Permission denied”
# Fix the obvious error.
rm /var/lib/postfix/master.lock
# Make sure there aren't other files that may have permissions or ownership issues in that directory.
ls -l /var/lib/postfix
# There are no other files.
# Try to start the service again.
systemctl start postfix
# No errors are reported.  The service is working! Let's double-check:
systemctl status postfix

You can check which services are in a failed state by using the following command:

systemctl list-units --failed

Let's say you reboot the system and want to make sure everything started correctly. Run the above command and troubleshoot each service as needed.

Also, if you have good service monitoring in place, check there.  Your monitors should report if any service isn’t functioning appropriately and you can use this information to track down issues and fix them as needed.

A List of Directories that Are Not in the RPM Database

Here are some common places to look for files that live outside of an RPM package:

/var/log/SERVICE_NAME/  (Example: /var/log/httpd)
/var/lib/SERVICE_NAME/  (Example: /var/lib/pgsql)
/var/spool/SERVICE_NAME/  (Example: /var/spool/postfix)
/var/www
/usr/local
/run
/var/run/
/tmp
/var/tmp
/root
/home
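For directories like these, rpm can't help, so you have to know or look up the expected owner and mode. For the PostgreSQL example used earlier, a typical repair might look like the following sketch (the exact paths and modes are assumptions; verify them against the documentation or a known-good system first):

chown -R postgres:postgres /var/lib/pgsql  # data typically belongs to the postgres user
chmod 700 /var/lib/pgsql/data              # PostgreSQL requires a private data directory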

Correcting Home Directory Ownership

If user home directories were changed due to a recursive chmod or chown command, they need to be fixed as well. If the ownership has changed, we can assume that each home directory and all of its contents should be owned by the corresponding user. For example, /home/jason should be owned by the jason user, and any files in /home/jason should be owned by the jason user, too. Here's a quick script to make this happen:

cd /home
for U in *
do
    # Assumes each directory under /home is named after the user who should own it
    chown -R "${U}" "${U}"
done

Be careful with the chown command because we don’t want to create another mess!

It could be the case that some files in a given home directory should not be owned by the user.  If you think this might be the case, your best course of action is to restore the home directories from backups.  Speaking of which…

Why Not Just Restore from Backup?

If you have a good and recent backup, restoring that backup might be a great option.  If the server in question doesn’t actually store data, then it would be a perfect candidate for a restore as you won’t lose any data.

Performing a restore can give you the peace of mind that all the files on the system have the proper permissions and ownership. After you've rigorously checked the services, the chances of any missed files causing operational issues are low. Nevertheless, there is a possibility of an issue arising at a later date. A restore reduces this probability even further.

You could also use a hybrid approach where you run through the above process and selectively restore parts of the system.

The downside of performing a restore is that it can be slower than using the process outlined above. It's much quicker to change the permissions on a 1TB file than it is to restore that file.

Of course, if you don’t have a backup that you can restore then you will have to follow a process like the one outlined above.

Marielle Price

Published in GNU/Linux Rules!


 

Red Hat is noted for making open source a culture and business model, not just a way of developing software, and its message of open source as the path to innovation resonates on many levels.  

In anticipation of the upcoming Open Networking Summit, we talked with Thomas Nadeau, Technical Director NFV at Red Hat, who gave a keynote address at last year’s event, to hear his thoughts regarding the role of open source in innovation for telecommunications service providers.

One reason for open source’s broad acceptance in this industry, he said, was that some very successful projects have grown too large for any one company to manage, or single-handedly push their boundaries toward additional innovative breakthroughs.

“There are projects now, like Kubernetes, that are too big for any one company to do. There's technology that we as an industry need to work on, because no one company can push it far enough alone,” said Nadeau. “Going forward, to solve these really hard problems, we need open source and the open source software development model.”

Here are more insights he shared on how and where open source is making an innovative impact on telecommunications companies.

Me: Why is open source central to innovation in general for telecommunications service providers?

Nadeau: The first reason is that service providers can be more in control of their own destiny. Some service providers are more aggressive and involved in this than others. Second, open source frees service providers from having to wait for long periods for the features they need to be developed.

And third, open source frees service providers from having to struggle with using and managing monolithic systems when all they really wanted was a handful of features. Fortunately, network equipment providers are responding to this overkill problem. They're becoming much more flexible and more modular, and open source is the best means to achieve that.

Me: In your ONS keynote presentation, you said open source levels the playing field for traditional carriers in competing with cloud-scale companies in creating digital services and revenue streams. Please explain how open source helps.

Nadeau: Kubernetes again. OpenStack is another one. These are tools that these businesses really need, not to just expand, but to exist in today's marketplace. Without open source in that virtualization space, you’re stuck with proprietary monoliths, no control over your future, and incredibly long waits to get the capabilities you need to compete.

There are two parts in the NFV equation: the infrastructure and the applications. NFV is not just the underlying platforms, but this constant push and pull between the platforms and the applications that use the platforms.

NFV is really virtualization of functions. It started off with monolithic virtual machines (VMs). Then came "disaggregated VMs" where individual functions, for a variety of reasons, were run in a more distributed way. To do so meant separating them, and this is where SDN came in, with the separation of the control plane from the data plane. Those concepts were driving changes in the underlying platforms too, which drove up the overhead substantially. That in turn drove interest in container environments as a potential solution, but it's still NFV.

You can think of it as the latest iteration of SOA with composite applications. Kubernetes is the kind of SOA model that they had at Google, which dropped the worry about the complicated networking and storage underneath and simply allowed users to fire up applications that just worked. And for the enterprise application model, this works great.

But not in the NFV case. In the NFV case, in the previous iteration of the platform at OpenStack, everybody enjoyed near one-for-one network performance. But when we move it over here to OpenShift, we're back to square one where you lose 80% of the performance because of the latest SOA model that they've implemented. And so now evolving the underlying platform rises in importance, and so the pendulum swing goes, but it's still NFV. Open source allows you to adapt to these changes and influences effectively and quickly. Thus innovations happen rapidly and logically, and so do their iterations.  

Me: Tell us about the underlying Linux in NFV, and why that combo is so powerful.

Nadeau: Linux is open source and it always has been in some of the purest senses of open source. The other reason is that it's the predominant choice for the underlying operating system. The reality is that all major networks and all of the top networking companies run Linux as the base operating system on all their high-performance platforms. Now it's all in a very flexible form factor. You can lay it on a Raspberry Pi, or you can lay it on a gigantic million-dollar router. It's secure, it's flexible, and scalable, so operators can really use it as a tool now.

Me: Carriers are always working to redefine themselves. Indeed, many are actively seeking ways to move out of strictly defensive plays against disruptors, and onto offense where they ARE the disruptor. How can network function virtualization (NFV) help in either or both strategies?

Nadeau: Telstra and Bell Canada are good examples. They are using open source code in concert with the ecosystem of partners they have around that code which allows them to do things differently than they have in the past. There are two main things they do differently today. One is they design their own network. They design their own things in a lot of ways, whereas before they would possibly need to use a turnkey solution from a vendor that looked a lot, if not identical, to their competitors’ businesses.

These telcos are taking a real "in-depth, roll up your sleeves" approach. Now that they understand what they're using at a much more intimate level, they can collaborate with the downstream distro providers or vendors. This goes back to the point that the ecosystem, which is analogous to the partner programs we have at Red Hat, is the glue that fills in gaps and rounds out the network solution that the telco envisions.

 

Published in GNU/Linux Rules!

These conferences are an excellent introduction to the properties and benefits of blockchain technology and digital currencies. Blockchain and cryptocurrencies are gradually entering many people's lives, as the number of companies designing products and services to take advantage of their properties grows every day. In the case of blockchain technology specifically, studies carried out by Deloitte and PwC reveal that more and more organizations are integrating block networks into their operating models in some way, since these provide security, immutability, and traceability to operations, in addition to reducing costs and processing times and eliminating the need for intermediaries. However, although this technology has gained popularity among those in the know, many people still do not understand how blockchain works. Therefore, below we present some TED talks that address these topics in an easy, simple way for audiences less familiar with them, and that are an excellent introduction to understanding in more detail the benefits that blockchain technology offers.

 

1. How the blockchain will radically transform the economy. This talk, led by Bettina Warburg, researcher and co-founder of Animal Ventures, explains blockchain in a very clear and simple way for a non-specialist audience, using terms, analogies, and examples to illustrate how this technology increases transparency in value transfers.

 

Source: https://www.ted.com/talks/bettina_warburg_how_the_blockchain_will_radically_transform_the_economy

 

2. How the blockchain is changing money and business. Don Tapscott, author of the book "Blockchain Revolution," offers an excellent overview of blockchain and illustrates five opportunities that can be exploited thanks to this technology. Tapscott covers everything from how people can gain control over their data to the guarantees this technology offers content creators, allowing them to receive fair compensation for their creations.

 

Source: https://www.ted.com/talks/don_tapscott_how_the_blockchain_is_changing_money_and_business

 

3. Bitcoin. Sweat. Tide. Meet the future of branded currency. In this talk, Paul Kemp-Robertson explains how cryptocurrencies are changing our conception of the economy and offers several examples of the advantages these assets have over banks and other financial institutions. Kemp-Robertson develops in detail the idea of how obsolete paper money is compared with the properties and advances the digital age brings.

 

Source: https://www.ted.com/talks/paul_kemp_robertson_bitcoin_sweat_tide_meet_the_future_of_branded_currency

 

4. We've stopped trusting institutions and started trusting strangers. Rachel Botsman explains in this talk how blockchain technology is gaining ground by removing the need to place our trust in the companies that mediate our transactions, a radical change that has not yet fully arrived but is gradually being observed more often. Botsman illustrates how certain sectors have changed, giving way to new platforms such as Airbnb and Uber, where we no longer speak of centralized service providers but of platforms that connect people with others who can meet certain needs, offering greater transparency and security.

 

Source: https://www.ted.com/talks/rachel_botsman_we_ve_stopped_trusting_institutions_and_started_trusting_strangers

 

5. Blockchain and the middleman. This talk discusses how blockchain reduces the need for intermediaries, since it guarantees the execution of tasks traditionally carried out by third parties in a quicker, easier way, without leaving room for errors attributable to third-party management. The talk puts in perspective that the question is not whether blockchain can replace intermediaries, but when this will happen.

 

 

Source: https://www.ted.com/watch/ted-institute/ted-bcg/blockchain-and-the-middleman

 

6. The future of money. The last talk, by Neha Narula, presents some ideas about the future of money: it will be programmable, software-driven, and will flow in a secure manner. Digital currencies such as Bitcoin have offered a first approximation and have shown that this is possible, but according to Narula, the next advances in this area will offer better properties, maintaining the faith that these technologies offer greater opportunities for people, leaving aside the role of more traditional institutions.

Source: https://www.ted.com/talks/neha_narula_the_future_of_money

Source: Enterprisersproject.com

Marielle Price

Published in Blockchain universe