Andromeda Computer - Blog

Deepin OS is one of the most modern-looking Linux distros. If you are a fan of sleek design and want an easy-to-use distro at the same time, get your hands on Deepin. It is also extremely easy to install. I am sure you'll love it.

The team has developed its own desktop environment based on Qt and also uses KDE Plasma's window manager, aka dde-kwin. The Deepin team has also developed 30 native applications to make users' day-to-day tasks easier to complete.

Some of the native Deepin applications are the Deepin installer, Deepin file manager, Deepin system monitor, Deepin Store, Deepin screen recorder, Deepin cloud print, and so on. If you ever run out of options, do not forget that thousands of open source applications are also available in the store.

The development of Deepin started in 2004 under the name 'Hiwix', and it has been active since then. The distro's name changed multiple times, but the motto remained the same: provide a stable operating system that is easy to install and use.

The current version, Deepin OS 15.11, is based on the Debian stable branch. It was released on 19 July 2019 with some great features and many improvements and bug fixes.





Cloud sync

The most notable feature in this release is Cloud sync. This feature is useful if you have multiple machines running Deepin or you have to reset your Deepin installation more often than most. The distro keeps your system settings in sync with cloud storage as soon as you sign in. If the installation is reset, the settings can be quickly imported from the cloud. This feature syncs all system settings, such as themes, sound settings, update settings, wallpaper, dock, and power settings. Unfortunately, the cloud sync feature is currently only available to users with a Deepin ID in mainland China.

The team is testing the feature and will release it for the rest of the Deepin users soon. Other user-friendly-focused Linux distributions should develop this feature as well. Cloud sync is especially useful for new Linux users: they don't have to set up everything from scratch if they mess up their current installation.




Deepin switched from dde-wm to dde-kwin in 15.10. dde-kwin consumes less memory and provides a faster and better user experience. Deepin 15.11 brings more stability to dde-kwin.

Deepin Store

The Deepin team has developed 30 native applications, among them the Deepin Store. It lets you easily browse and install applications from the distro repositories. The new release ships with Deepin Store 5.3. The updated store app can now determine the user's region based on the Deepin ID's location. Another option has been added to the Deepin file manager for burning files to CD/DVD. Though CD/DVD is a thing of the past, if somebody still needs to burn data to one, it's extremely easy to do in Deepin.

To play media, the distro ships with the Deepin movie application, which now supports loading subtitles via drag-and-drop. Just drag the subtitle file and drop it on the player while the movie is playing. Besides these new features, there are more improvements and bug fixes in Deepin 15.11. If you are looking for a beautiful, feature-rich, and stable Linux distribution, Deepin can be the platform of your choice.

Published in GNU/Linux Rules!
Wednesday, 08 May 2019 23:04

Using rsync to back up your Linux system

Find out how to use rsync in a backup scenario.
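As a quick taste before the full article, a minimal backup sketch might look like the following. The paths are placeholders for your own source and backup destination:

```shell
# Mirror a source directory to a backup location.
# -a preserves permissions, ownership, and timestamps; --delete removes files
# from the backup that no longer exist in the source.
rsync -a --delete /home/ /mnt/backup/home/
```

Run it again later and rsync transfers only what changed, which is what makes it so well suited to backups.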

Published in GNU/Linux Rules!



Linux has come a long way since 1991. These events mark its evolution.

1. Linus releases Linux

Linus Torvalds initially released Linux to the world in 1991 as a hobby. It didn't remain a hobby for long!



2. Linux distributions

In 1993, several Linux distributions were founded, notably Debian, Red Hat, and Slackware. These were important because they demonstrated Linux's gains in market acceptance and development that enabled it to survive the tumultuous OS wars, browser wars, and protocol wars of the 1990s. In contrast, many established, commercial, and proprietary products did not make it past the turn of the millennium!



3. IBM's big investment in Linux

In 2000, IBM announced it would invest US$1 billion in Linux. In his CNN Money article about the investment, Richard Richtmyer wrote: "The announcement underscores Big Blue's commitment to Linux and marks significant progress in moving the alternative operating system into the mainstream commercial market."



4. Hollywood adopts Linux

In 2002, it seemed the entire Hollywood movie industry adopted Linux. Disney, DreamWorks, and Industrial Light & Magic all began making movies with Linux that year.



5. Linux for national security

In 2003, another big moment came with the US government's acceptance of Linux. Red Hat Linux was awarded the Department of Defense Common Operating Environment (COE) certification. This is significant because the government—intelligence and military agencies in particular—have very strict requirements for computing systems to prevent attacks and support national security. This opened the door for other agencies to use Linux. Later that year, the National Weather Service announced it would replace outdated systems with new computers running Linux.



6. The systems I managed

This "moment" is really a collection of my personal experiences. As my career progressed in the 2000s, I discovered several types of systems and devices that I managed were all running Linux. Some of the places I found Linux were VMware ESX, F5 Big-IP, Check Point UTM Edge, Cisco ASA, and PIX. This made me realize that Linux was truly viable and here to stay.



7. Ubuntu

In 2004, Canonical was founded by Mark Shuttleworth to provide an easy-to-use Linux desktop—Ubuntu Linux—based on the Debian distribution. I think Ubuntu Linux helped to expand the desktop Linux install base. It put Linux in front of many more people, from casual home users to professional software developers.



8. Google Linux

Google released two operating systems based on the Linux kernel: the Android mobile operating system in mid-2008 and Chrome OS, running on a Chromebook, in 2011. Since then, millions of Android mobile phones and Chromebooks have been sold.



9. The cloud is Linux

In the past 10 years or so, cloud computing has gone from a grandiose vision of computing on the internet to a reinvention of how we use computers personally and professionally. The big players in the cloud space are built on Linux, including Amazon Web Services, Google Cloud Services, and Linode. Even in cases where we aren't certain, such as Microsoft Azure, running Linux workloads is well supported.



10. My car runs Linux

And so will yours! Many automakers began introducing Linux a few years ago. This led to the formation of the collaborative open source project called Automotive Grade Linux. Major car makers, such as Toyota and Subaru, have joined together to develop Linux-based automotive entertainment, navigation, and engine-management systems.



Share your favorite


Author: Alan Formy-Duval

Marielle Price


Published in GNU/Linux Rules!


If you’ve come here looking to fix an errant recursive chmod or chown command on an RPM-based Linux system, then here is the quick solution. Run the following commands using root privileges:

rpm --setugids -a
rpm --setperms -a



The --setugids option to the rpm command sets the user/group ownership of files in a given package. By using the -a option we're telling rpm to do this for all packages. The --setperms option sets the permissions of files in the given package.

If this fixes your issue, great!  If not, or you want to be thorough, continue reading.
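To see in advance which files deviate from what the RPM database recorded, you can use rpm's verify mode. A sketch (the awk filter simply keeps lines whose status flags mention a mode, user, or group change):

```shell
# rpm -Va verifies every installed package; the first column is a set of
# status flags where M = mode (permissions), U = user, and G = group differ
# from what the package database recorded.
rpm -Va 2>/dev/null | awk '$1 ~ /M|U|G/ {print}'
```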

Why Would You Need To Fix the Permissions and User/Group Ownership of Files

The most common reason you’ll need to follow the procedure below is to recover from a chmod or chown command that didn’t do what you initially intended it to do.  Using this procedure can save you from having to perform a complete system restore or a complete system reinstall.

Perhaps someone else accidentally executed a recursive chmod or chown command on part of, or even the entire, file system. Even if the mistake is noticed and the command is stopped by typing Control-C as quickly as possible, many files could have been changed in that short period of time, and you won't be able to immediately tell which files were affected.

Problems Caused by Incorrect Permissions and Ownerships of Files

Having improper file permissions or ownerships can cause processes and services to behave in unexpected ways, stop working immediately, or prevent them from starting once they’ve been stopped.

For example, if the user running the web server process can’t read the files it’s supposed to serve, then the service it provides is effectively broken.

If a service is already running, it probably doesn’t need to read its configuration file(s) again as it has that information in memory.  However, if it can’t read its configuration when it attempts to start, it simply isn’t going to start.

Also, when some services start, they create a lock file to indicate that the service is running.  When the service stops, it deletes the lock file. However, if the permissions on that lock file are changed while the service is running such that the service can’t delete the file, then of course the lock file won’t get deleted.  This will prevent the service from starting again as it thinks it’s already running due to the presence of the lock file.

Perhaps the file that actually needs to be executed no longer has execute permissions.  Needless to say, that will definitely keep a service from starting.

If you have a service such as a database that writes data, it needs the proper permissions to write data to file, create new files, and so on.

Those are some common issues you can run into when file permissions and ownerships are not set properly.

Examples of Errant chmod and chown Commands

A common way a chmod or chown command can go wrong is by using recursion while making a typing mistake or providing an incorrect path.  For example, let’s say you’ve created some configuration files in the /var/lib/pgsql directory as the root user. You want to make sure all those files are owned by the postgres user, so you intend to run this command:

chown -R postgres /var/lib/pgsql

However, you accidentally add a space between the leading forward slash and var, making the actual command executed this one:

chown -R postgres / var/lib/pgsql

Oh what a difference a space can make!  Now, every file on the system is owned by the postgres user.

This happens because chown rightly interpreted the first forward slash ( “/” ) as an absolute path to operate upon and “var/lib/pgsql” as a relative path to operate on. The chown command, and any Linux command for that matter, only does what you tell it to do. It can't read your mind. It doesn't know that you intended to supply only the one path of /var/lib/pgsql.
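One way to limit the damage from a mistake like this is to snapshot ownership and permissions before any risky recursive change. A sketch using GNU find (the snapshot path is just an example):

```shell
# Record the current user, group, octal mode, and path of every file under
# the directory you are about to modify, so you have a reference to check
# against (or restore from) if the recursive command goes wrong.
find /var/lib/pgsql -printf '%u %g %m %p\n' > /root/pgsql-ownership.txt
```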

Fixing File Ownerships and Permissions with the RPM Command

Continuing with our example, you should be able to execute the following command with root privileges and return to a fairly stable state:

rpm --setugids -a

This command will restore the owner and group membership for every file that was installed via an RPM package. Changing the ownership of a file can cause the set-user-ID (SUID) or set-group-ID (SGID) permission bits to be cleared. Because of this, we need to restore the permissions on the files as well:

rpm --setperms -a

Now every file that is known by rpm will have the same permissions as when it was initially installed.

By the way, use this same process to fix an errant chmod command, too. Be sure to run the commands in the same order due to the SUID and SGID issues that could arise; i.e., run rpm with the --setperms option last.

Fixing File Ownerships and Permissions for Files Not Known by RPM

Not all the files on the system are going to be part of an RPM package.  Most data, either transient or permanent, will live outside of an RPM package.  Examples include temporary files, files used to store database data, lock files, web site files, some configuration files, and more depending on the system in question.

At least check the most important services that the system provides.  For example, if you are working on a database server, make sure the database service starts correctly.  If it’s a web server, make sure the web server service is functioning.

Here is the pattern:

systemctl restart SERVICE_NAME

If the service does not start, determine the reason by looking at the logs and messages:

journalctl -xe

Fix any issues and try again until the service starts.


systemctl restart postfix
# The service fails to start.
journalctl -xe
# The error message is “fatal: open lock file /var/lib/postfix/master.lock: cannot open file: Permission denied”
# Fix the obvious error.
rm /var/lib/postfix/master.lock
# Make sure there aren't other files that may have permissions or ownership issues in that directory.
ls -l /var/lib/postfix
# There are no other files.
# Try to start the service again.
systemctl start postfix
# No errors are reported.  The service is working! Let's double-check:
systemctl status postfix

You can check which services are in a failed state by using the following command:

systemctl list-units --failed

Let’s say you reboot the system and want to make sure everything started okay. Run the above command and troubleshoot each service as needed.
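If several units failed, a small loop can pull the recent log lines for each one. A sketch, assuming systemd's standard tools are available:

```shell
# List failed units (plain output, no header), then show the last 20 journal
# lines for each so permission and ownership errors can be spotted quickly.
for unit in $(systemctl list-units --failed --plain --no-legend | awk '{print $1}'); do
    echo "=== $unit ==="
    journalctl -u "$unit" -n 20 --no-pager
done
```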

Also, if you have good service monitoring in place, check there.  Your monitors should report if any service isn’t functioning appropriately and you can use this information to track down issues and fix them as needed.

A List of Directories that Are Not in the RPM Database

Here are some common places to look for files that live outside of an RPM package:

/var/log/SERVICE_NAME/  (Example: /var/log/httpd)
/var/lib/SERVICE_NAME/  (Example: /var/lib/pgsql)
/var/spool/SERVICE_NAME/  (Example: /var/spool/postfix)
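Once you have identified a directory like these, fixing it usually means re-applying the owner the service expects. For example, for the PostgreSQL case above (the postgres user and the 700 mode are typical defaults, but check your distribution's packaging before applying them):

```shell
# Give the PostgreSQL service back its data directory, then tighten the mode;
# an owner-only (700) data directory is the usual PostgreSQL requirement.
chown -R postgres:postgres /var/lib/pgsql
chmod 700 /var/lib/pgsql/data
```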

Correcting Home Directory Ownership

If user home directories were changed due to a recursive chmod or chown command, then they need to be fixed.  If the ownership has changed, we can assume that each home directory and all of its contents should be owned by the corresponding user.  For example, “/home/jason” should be owned by the “jason” user, and any files in “/home/jason” should be owned by the “jason” user, too. Here’s a quick script to make this happen:

cd /home
for U in *
do
  chown -R "${U}" "${U}"
done

Be careful with the chown command because we don’t want to create another mess!

It could be the case that some files in a given home directory should not be owned by the user.  If you think this might be the case, your best course of action is to restore the home directories from backups.  Speaking of which…

Why Not Just Restore from Backup?

If you have a good and recent backup, restoring that backup might be a great option.  If the server in question doesn’t actually store data, then it would be a perfect candidate for a restore as you won’t lose any data.

Performing a restore can give you the peace of mind that all the files on the system have the proper permissions and ownership.  After you’ve rigorously checked the services, the chances of any missed files causing operational issues are low. Nevertheless, there is a possibility of an issue arising at a later date.  A restore reduces this probability even further.

You could also use a hybrid approach where you run through the above process and selectively restore parts of the system.

The downside of performing a restore is that it can be slower than using the process outlined above.  It’s much quicker to change the permissions on a 1 TB file than it is to restore that file.

Of course, if you don’t have a backup that you can restore then you will have to follow a process like the one outlined above.

Marielle Price

Published in GNU/Linux Rules!



Red Hat is noted for making open source a culture and business model, not just a way of developing software, and its message of open source as the path to innovation resonates on many levels.  

In anticipation of the upcoming Open Networking Summit, we talked with Thomas Nadeau, Technical Director NFV at Red Hat, who gave a keynote address at last year’s event, to hear his thoughts regarding the role of open source in innovation for telecommunications service providers.

One reason for open source’s broad acceptance in this industry, he said, was that some very successful projects have grown too large for any one company to manage, or single-handedly push their boundaries toward additional innovative breakthroughs.

“There are projects now, like Kubernetes, that are too big for any one company to do. There's technology that we as an industry need to work on, because no one company can push it far enough alone,” said Nadeau. “Going forward, to solve these really hard problems, we need open source and the open source software development model.”

Here are more insights he shared on how and where open source is making an innovative impact on telecommunications companies.

Me: Why is open source central to innovation in general for telecommunications service providers?

Nadeau: The first reason is that the service providers can be in more control of their own destiny. There are some service providers that are more aggressive and involved in this than others. Second, open source frees service providers from having to wait for long periods for the features they need to be developed.

And third, open source frees service providers from having to struggle with using and managing monolith systems when all they really wanted was a handful of features. Fortunately, network equipment providers are responding to this overkill problem. They're becoming much more flexible, more modular, and open source is the best means to achieve that.

Me: In your ONS keynote presentation, you said open source levels the playing field for traditional carriers in competing with cloud-scale companies in creating digital services and revenue streams. Please explain how open source helps.

Nadeau: Kubernetes again. OpenStack is another one. These are tools that these businesses really need, not to just expand, but to exist in today's marketplace. Without open source in that virtualization space, you’re stuck with proprietary monoliths, no control over your future, and incredibly long waits to get the capabilities you need to compete.

There are two parts in the NFV equation: the infrastructure and the applications. NFV is not just the underlying platforms, but this constant push and pull between the platforms and the applications that use the platforms.

NFV is really virtualization of functions. It started off with monolithic virtual machines (VMs). Then came "disaggregated VMs" where individual functions, for a variety of reasons, were run in a more distributed way. To do so meant separating them, and this is where SDN came in, with the separation of the control plane from the data plane. Those concepts were driving changes in the underlying platforms too, which drove up the overhead substantially. That in turn drove interest in container environments as a potential solution, but it's still NFV.

You can think of it as the latest iteration of SOA with composite applications. Kubernetes is the kind of SOA model that they had at Google, which dropped the worry about the complicated networking and storage underneath and simply allowed users to fire up applications that just worked. And for the enterprise application model, this works great.

But not in the NFV case. In the NFV case, in the previous iteration of the platform at OpenStack, everybody enjoyed near one-for-one network performance. But when we move it over here to OpenShift, we're back to square one where you lose 80% of the performance because of the latest SOA model that they've implemented. And so now evolving the underlying platform rises in importance, and so the pendulum swing goes, but it's still NFV. Open source allows you to adapt to these changes and influences effectively and quickly. Thus innovations happen rapidly and logically, and so do their iterations.  

Me: Tell us about the underlying Linux in NFV, and why that combo is so powerful.

Nadeau: Linux is open source and it always has been in some of the purest senses of open source. The other reason is that it's the predominant choice for the underlying operating system. The reality is that all major networks and all of the top networking companies run Linux as the base operating system on all their high-performance platforms. Now it's all in a very flexible form factor. You can lay it on a Raspberry Pi, or you can lay it on a gigantic million-dollar router. It's secure, it's flexible, and scalable, so operators can really use it as a tool now.

Me: Carriers are always working to redefine themselves. Indeed, many are actively seeking ways to move out of strictly defensive plays against disruptors, and onto offense where they ARE the disruptor. How can network function virtualization (NFV) help in either or both strategies?

Nadeau: Telstra and Bell Canada are good examples. They are using open source code in concert with the ecosystem of partners they have around that code which allows them to do things differently than they have in the past. There are two main things they do differently today. One is they design their own network. They design their own things in a lot of ways, whereas before they would possibly need to use a turnkey solution from a vendor that looked a lot, if not identical, to their competitors’ businesses.

These telcos are taking a real “in-depth, roll up your sleeves” approach. Now that they understand what they're using at a much more intimate level, they can collaborate with the downstream distro providers or vendors. This goes back to the point that the ecosystem, which is analogous to the partner programs we have at Red Hat, is the glue that fills in gaps and rounds out the network solution that the telco envisions.


Published in GNU/Linux Rules!

These talks are an excellent introduction to the properties and benefits of blockchain technology and digital currencies. Blockchain and cryptocurrencies are entering people's lives little by little, as the number of companies designing products and services to take advantage of their properties grows every day.

In the case of blockchain technology specifically, studies carried out by Deloitte and PwC reveal that more and more organizations are integrating block networks in some way into their operating models, since these provide security, immutability, and traceability to operations, in addition to reducing costs and processing times and eliminating the need for intermediaries.

However, although this technology has gained much popularity among connoisseurs, there are still many people who do not know how blockchain works. Therefore, below we present some TED talks that address these issues in an easy, simple way for audiences less familiar with them, and which are an excellent introduction to understanding in more detail the benefits offered by blockchain technology.


1-. How will blockchain radically transform the economy? This talk, led by Bettina Warburg, researcher and co-founder of Animal Ventures, explains blockchain in a very clear and simple way for the less knowledgeable public, using terms, analogies, and examples to illustrate how this technology increases transparency in value transfers.




2-. How is blockchain changing money and business? Don Tapscott, author of the book "Blockchain Revolution", offers an excellent overview of blockchain and illustrates five opportunities that can be exploited thanks to this technology. Tapscott covers everything from how people can keep control over their data to the guarantees this technology offers content creators, allowing them to receive fair compensation for their creations.




3-. Bitcoin. Sweat. Tide. Meet the future of branded currency. In his talk, Paul Kemp-Robertson explains how cryptocurrencies are changing our conception of the economy, offering several examples of the advantages of these assets over banks and other financial institutions. Kemp-Robertson develops in detail the idea of how obsolete paper money is compared with the properties and advances that the digital age brings.




4-. We've stopped trusting institutions and started trusting strangers. Rachel Botsman explains in this talk how blockchain technology gains ground by removing the need to place our trust in the companies that mediate our processes, a radical change that has not yet been fully realized but is gradually being observed more frequently. Botsman illustrates how certain sectors have changed, giving way to new platforms such as Airbnb or Uber, where we no longer speak of centralized service providers but of platforms that connect people with others who can meet certain needs, offering greater transparency and security.




5-. Blockchain and intermediaries. This talk discusses how blockchain reduces the need for intermediaries, since it guarantees the execution of tasks traditionally carried out by third parties in a quicker, easier way, without leaving room for errors attributable to third-party management. The talk puts in perspective that the question is not whether blockchain can replace intermediaries, but when this will happen.





6-. The future of money. In this last talk, Neha Narula presents some ideas about the future of money, asserting that it will be programmable, software-driven, and will flow in a secure manner. Digital currencies such as Bitcoin have offered a first approximation and shown that this is possible, but according to Narula, the next advances in this area will offer better properties, maintaining the faith that these technologies provide greater opportunities for people while leaving aside the role of more traditional institutions.


Marielle Price

Published in Blockchain universe