Linux systemd dev says open source is ‘SICK’, kernel community ‘awful’

Lennart Poettering, creator of the systemd system management software for Linux, says the open-source world is “quite a sick place to be in.”

He also said the Linux development community is “awful” – and he pins the blame for that on Linux supremo Linus Torvalds.

“A fish rots from the head down,” Poettering said in a post to his Google+ feed on Sunday.

Poettering said Torvalds’ confrontational and often foul-mouthed management style is “not an efficient way to run a community” and that it sets an example that is followed by other kernel developers, creating a hostile environment for newcomers.

What’s more, he said, the kernel development community is insular and the overall tone of its discourse is likely to keep it that way.

“The Linux community is dominated by western, white, straight, males in their 30s and 40s these days,” Poettering wrote. “I perfectly fit in that pattern, and the rubbish they pour over me is awful. I can only imagine that it is much worse for members of minorities, or people from different cultural backgrounds, in particular ones where losing face is a major issue.”

Torvalds is indeed well known for his acerbic posts to Linux kernel mailing lists. Poettering cited one particular missive in which Torvalds said some kernel developers should be “retroactively aborted” for their stupidity, and in another post he said he hoped ARM system-on-chip (SoC) developers would “all die in some incredibly painful accident.”

The Linux main man has no great love for the core systemd developers, either. In April he called top systemd coder Kay Sievers a “fucking prima donna” and said he didn’t want to ever work with him.

In the past, Torvalds has explained away such outbursts, saying that being grumpy is just in his nature.

“I’d like to be a nice person and curse less and encourage people to grow rather than telling them they are idiots,” Torvalds said during an online chat with Finland’s Aalto University in April. “I’m sorry – I tried, it’s just not in me.”

But Poettering isn’t buying it. As a result of the behavior of Torvalds and a few other core kernel developers, he said, he hasn’t posted to the Linux kernel mailing list “in years” – although he added that the systemd development community is “fantastic.”

“If you are a newcomer to Linux, either grow a really thick skin. Or run away, it’s not a friendly place to be in,” Poettering wrote by way of advice. “It is sad that it is that way, but it certainly is.”


DDoS attacks rally Linux servers

A significant string of distributed denial-of-service (DDoS) campaigns during the second quarter of 2014 was driven by Linux web servers that were compromised and infected with IptabLes and IptabLex malware, according to a threat advisory from Akamai’s Prolexic Security Engineering & Research Team (PLXsert).

“The .IptabLex/s threat is extensive,” Greg Lindor, lead malware analyst for the Akamai PLXsert team, said in Thursday email correspondence. “This threat is being used to take part in DDoS campaigns with significant size and reach.”

Researchers at PLXsert observed and measured the attacks, which exploited a number of vulnerabilities, including flaws in Apache Struts, Tomcat and Elasticsearch, on unmaintained servers. Stuart Scholly, senior vice president and general manager of the Security Business Unit at Akamai, urged Linux admins in a press release “to take action to protect their servers.”

Linux is not usually targeted in large scale DDoS attacks, Lindor noted. These types of actors typically seek “the route of least resistance…when building a botnet of significant size.”

The payloads, named .IptabLes or .IptabLex, are placed in the /boot directory; when the system is rebooted, an .IptabLes binary runs. The infected system then contacts a remote host via a self-updating feature to download a file.
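Given the file locations described above, an admin could do a quick first-pass check for the dropped payloads. The sketch below simply looks for the reported file names in /boot; it is a heuristic only, and the exact names may vary by variant:

```python
import os

def suspicious_boot_files(boot_dir="/boot"):
    """Return directory entries whose names resemble the .IptabLes/.IptabLex
    payload names reported in the Akamai advisory. Heuristic only."""
    try:
        entries = os.listdir(boot_dir)
    except OSError:
        # Directory missing or unreadable: nothing to report
        return []
    return sorted(name for name in entries if "iptable" in name.lower())

if __name__ == "__main__":
    hits = suspicious_boot_files()
    if hits:
        print("Possible infection, found:", ", ".join(hits))
    else:
        print("No suspicious payload names found in /boot")
```

A match is only a starting point for investigation; per the advisory, a full response would also examine boot scripts and outbound connections to unknown hosts.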

“As far as DDoS payloads, the .IptabLes threat is very similar to other DDoS related bot threats,” Lindor said. “What is new is the way this threat is being propagated and the targeted victims.”

Researchers said that in the lab, the infected server tried to contact two IP addresses in Asia. While the bulk of past DDoS bot infections have come out of Russia, more recently many have been found to originate from servers hosted in the U.S. The command-and-control (C2) servers for the two payloads, however, are located in Asia.

Researchers at PLXsert believe that the DDoS botnet will expand and cause further infestation.


Noting that “Linux is usually considered the Operating System of choice to build systems running many of our web related services,” Lindor said that “the focus on targeting Linux systems to perform large scale DDoS attacks is relatively new and uncharted territory for these types of actors.” After unrestricted access is gained “by any means the whole system is considered compromised,” he said. Simply hardening operating systems, then, “is no longer sufficient,” but rather “correctly configuring the services being [run] on these platforms is the first line of defense against threats like .IptabLes/x.”


Red Hat to ditch MySQL for MariaDB in RHEL 7

In a surprise move, Red Hat has announced that version 7 of Red Hat Enterprise Linux (RHEL) will ship with the MariaDB database installed by default, in place of MySQL.

The announcement was made at the company’s Red Hat Summit, which wrapped up in Boston on Friday.

MariaDB is a fork of MySQL that was launched in 2009 by original MySQL coder Ulf Michael “Monty” Widenius. It’s meant to be a drop-in replacement, meaning any application that runs on MySQL should run unmodified on the MariaDB server. MariaDB does have one important characteristic that MySQL doesn’t share, however: MariaDB isn’t owned by Oracle.

Oracle acquired MySQL as part of its 2009 purchase of Sun Microsystems and almost immediately began tightening the reins, much to the consternation of MySQL’s fans. Support options were cut, and Oracle shifted to an “open core” development model in which the open source database server is sold alongside expensive, proprietary add-ons.

Widenius founded MariaDB largely as a reaction against these unwanted changes, and the project has steadily been gaining converts among the MySQL user community.

A number of popular community-driven Linux distributions have already begun shipping MariaDB in place of MySQL by default, including Arch Linux, openSUSE, and Slackware. But for RHEL to do so is quite a coup indeed, and somewhat unexpected.

In May, the Fedora Project shipped a beta of Fedora 19 with MariaDB installed by default. But although Fedora is technically the upstream distribution for RHEL, and RHEL 7 will be based on Fedora 19, the actual software bundles that ship with the two distributions often differ significantly.

What’s more, last week Red Hat shipped the first beta of Red Hat Software Collections, an officially supported bundle of databases and programming languages for RHEL. But while that offering includes MariaDB, it also comes with MySQL and PostgreSQL, and Red Hat offers no preference among the three.

There is at least one good reason why Red Hat might be itching to move away from MySQL, though – namely, that there’s no love lost between Red Hat and Oracle, particularly since the database giant began offering Oracle Linux, a clone of the RHEL code base that lets Oracle keep all the money.

At Red Hat Summit, senior engineering manager Radek Vokál said that Red Hat also expected it would be easier to contribute certain patches and features to MariaDB than to MySQL. Apparently, Oracle has not been particularly amenable.

Vokál said that “some versions” of RHEL 7 will still ship with MySQL, but that MariaDB would be “the main thing” from now on.

The Reg reached out to Red Hat for further clarification, but a spokesperson did not respond by the time we pushed the big, red “Publish” button.

RHEL 7 is expected to ship with MariaDB sometime in the second half of 2013.


Linux Top 3: Raspberry Pi B+, CentOS 7 and RHEL 5.11

1) Raspberry Pi B+

Few devices have captured the imagination of amateur computer hobbyists like the Raspberry Pi has in recent years. The promise of a small, flexible, Linux-powered device that can do anything a developer can imagine is one that many people have embraced.

While the Linux piece of the Raspberry Pi is about software, hardware matters too, and the hardware is now getting an update.

The new Raspberry Pi Model B+ is an improvement on the existing Model B and uses the same BCM2835 application processor. One of the most noticeable differences on the new device is the integration of four USB 2.0 ports instead of only two. Overall power consumption has been reduced by nearly 1 watt, and the board has a neater form factor as well. There is now also a micro-SD card slot, replacing the previous SD card slot.

Raspberry Pi Founder Eben Upton wrote:

“In the two years since we launched the current Raspberry Pi Model B, we’ve often talked about our intention to do one more hardware revision to incorporate the numerous small improvements people have been asking for. This isn’t a ‘Raspberry Pi 2,’ but rather the final evolution of the original Raspberry Pi.”

2) CentOS 7

Barely a month after Red Hat Enterprise Linux 7 was released, CentOS 7 became available last week. The speed with which CentOS 7 has been released is particularly noteworthy: when RHEL 6 debuted in November of 2010, the CentOS project was unable to put out its corresponding CentOS 6 release until eight months later, in July of 2011.

CentOS 7 is also the first release from the CentOS community since it officially partnered with Red Hat in January of 2014.

3) RHEL 5.11

With RHEL 7 now generally available, Red Hat is winding down its production releases of RHEL 5. On July 9, Red Hat announced the final RHEL 5 beta.

“This release continues to provide system administrators with a secure, stable, and reliable platform for their organization’s enterprise applications,” Red Hat stated. “While primarily focused on improving security and stability, Red Hat Enterprise Linux 5.11 Beta provides additional enhancements to subscription management, debugging capabilities, and more.”


Red Hat Enterprise Linux OpenStack Platform 5 reaches general release

RED HAT HAS ANNOUNCED the release of Red Hat Enterprise Linux OpenStack Platform 5 (RHELOP5).

The new version is the latest iteration, based on OpenStack Icehouse, and is aimed at allowing service providers, including telcos, ISPs and cloud providers, to spin up OpenStack-powered clouds.

Introduced as a beta back in June, RHELOP5 will have a three-year lifecycle and support from over 250 OpenStack partners.

VMware infrastructure is integrated for virtualisation management, storage and networking, with seamless integration of existing VMware vSphere resources to drive virtualised nodes, all controlled from the OpenStack Dashboard.

RHELOP5 includes improved support for virtual machines, with new cryptographic security using the para-virtualised random number generator introduced in Red Hat Enterprise Linux 7 (RHEL7).

Also new is improved workload management across available cloud resources with server groups spread across the cloud, producing lower communication latency and improved performance.

Radhesh Balakrishnan, Red Hat GM of Virtualisation and Openstack, said, “We see momentum behind Openstack as a private cloud platform of choice from enterprise customers and service providers alike.

“Red Hat Enterprise Linux Openstack Platform 5 not only offers a production-ready, supported version of Openstack Icehouse, but it brings a number of features that will simplify its use, and enhance dependability for enterprise users.

“Alongside those new features, we’re extending our support lifecycle for Red Hat Enterprise Linux Openstack Platform, giving users confidence that the solution they deploy will be supported by our global team for the next three years.”

RHELOP5 for Red Hat Enterprise Linux 7 (RHEL7) is available now with support for RHEL6 to follow in the coming weeks.


CoreOS Announces Managed Linux, World’s First “OS-as-a-Service”

CoreOS, the lightweight Linux distribution optimized for massive server deployments, on Monday introduced a new service called Managed Linux that its developers describe as the world’s first OS-as-a-Service.

The announcement of the monthly subscription service comes on the heels of Series A funding in which the commercial entity that distributes CoreOS raised $8.5 million from Kleiner Perkins Caufield & Byers, Sequoia Capital and Fuel Capital.

“The big announcement today is we are offering some of our first commercial products beyond CoreOS,” Alex Polvi, CoreOS chief executive and co-founder, told CRN.


Managed Linux is a monthly subscription service that offers updates and patches to CoreOS servers, often running as large clusters, through a tool called FastPatch. Subscribers also get CoreUpdate, a control panel and set of APIs for managing those rolling updates themselves.

CoreOS also integrates Docker 1.0, a popular package for deploying and managing the Linux containers that isolate applications.

“The novel thing about how we are delivering this is that it’s very much like Software-as-a-Service,” Polvi said.

“You get that rolling stream of continuous updates and patches. You’re always running the latest version,” he told CRN.

Managed Linux eliminates the inconvenience, common with other Linux distributions, of performing migrations when new versions of the OS are released, according to Polvi.

While CoreOS is an open-source operating system, the paid service will benefit enterprises that want to hire a company that will be accountable for the process of ensuring they are running the most stable and secure version, he said.

“The technology behind CoreOS is game-changing,” Mike Abbott, a partner at Kleiner Perkins Caufield & Byers, said in a statement.

“CoreOS is solving infrastructure problems that have plagued the space for years with an operating system that not only automatically updates and patches servers with the latest software, but also provides less downtime, furthering the security and resilience of Internet architecture,” Abbott said.

Managed Linux is available for CoreOS deployments on multiple platforms. The operating system works exactly the same on bare metal servers as it does when hosted on several public clouds, including Amazon, Google and Rackspace.

Polvi said CoreOS is probably most utilized by companies running big deployments on their own on-premises hardware, the typical customers buying Linux subscriptions today.

Because CoreOS is a relatively new project, it’s still distributed through a direct channel. However, that is expected to change down the road, according to Polvi.

“We’re just getting rolling with system integrators. Haven’t fired up a whole channel program yet, but I think it’s inevitable,” he told CRN.

Polvi believes CoreOS, aided by innovative technologies like Docker, is a leader in the larger trend toward warehouse-scale computing, a concept laid out a few years back in an influential research paper from Google.

The theory goes that datacenters will come to behave as individual computers, with massive clusters of servers connected by high-speed networks working in concert to fulfill the computing requirements of the future.


Red Hat Buys French OpenStack Service Provider eNovance

Red Hat, a provider of Linux and other open-source solutions, has entered into an agreement to buy eNovance, a French provider of open-source cloud computing services. The move strengthens Red Hat’s position in enterprise cloud solutions for the open-source cloud platform OpenStack.

eNovance, founded in 2008, assists service providers and large companies in building and deploying cloud infrastructures, and works with organizations to manage customer Web applications. It has more than 150 global customers, and is one of the contributors to the OpenStack project. Red Hat is the top contributor to the last two releases of OpenStack, according to the company.

“Red Hat is all in on OpenStack,” the company says on its Web site. It has been touting its Enterprise Linux OpenStack Platform, which, it says, provides “all the benefits you’ve come to expect from Red Hat Enterprise Linux, plus the fastest-growing cloud infrastructure platform from OpenStack.” Its OpenStack Cloud Infrastructure Partner Network is, according to Red Hat, the world’s largest commercial OpenStack ecosystem.

‘World Class’ Provider

The price for the acquisition is 50 million euros in cash and 20 million euros in Red Hat common stock, totaling about $95 million. eNovance has offices in Paris, Montreal and Bangalore, India, and Red Hat is based in Raleigh, North Carolina.

Arun Oberoi, Red Hat executive vice president, said in a statement that eNovance was “a world-class cloud computing services provider with a proven track record of successful global deployments.”

For its part, eNovance co-founder and CEO Raphael Ferreira told news media that both companies understand “the transformative power OpenStack can have on the enterprise market when it is both deployed and integrated in the right fashion.”

Both companies have worked together since last year to provide OpenStack implementation and integration to joint customers, and in May they announced an expanded collaboration to foster Network Functions Virtualization and telecommunications innovations in OpenStack.

‘Coalition Building’

Roger Kay, an analyst with industry research firm Endpoint Technologies Associates, told us that the acquisition is “part of the coalition building that’s going on right now as large enterprises shift to a hybrid/cloud model.” OpenStack, he said, is welcomed by IT departments that don’t want to deal with another multi-standard environment.

Earlier this week, Red Hat announced the global availability of its Enterprise Virtualization 3.4, which provided enhancements for virtualization infrastructure and guest support in Red Hat Enterprise Linux 7. Version 3.4 also offered tech previews of OpenStack features, including the importation of a Glance image as a template to provision a new virtual machine.

The company also recently partnered with Cisco and a financial group based in Durham, North Carolina, to create a $26 million venture fund, the Bull City Venture Partners, to support North Carolina-based endeavors. Other participants included Blue Cross Blue Shield of North Carolina and Capitol Broadcasting Company.


Red Hat Enterprise Linux 7 Release Candidate now becomes publicly available

Red Hat announced Wednesday that Red Hat Enterprise Linux 7 Release Candidate (RC) is now publicly available for testing.

A pre-release build of Red Hat Enterprise Linux 7, Red Hat Enterprise Linux 7 RC offers a near-final look at Red Hat’s operating system crafted for the open hybrid cloud, building upon the feedback collected during the beta program for Red Hat Enterprise Linux 7.

Red Hat Enterprise Linux 7 RC retains the key capabilities that have made Red Hat Enterprise Linux synonymous with security and stability in the open enterprise world, such as SELinux, while providing the flexibility and agility an operating system requires to tackle the challenges of modern infrastructure and next-generation computing.

Red Hat Enterprise Linux 7 RC runs applications in isolated, secure lightweight containers using SELinux and resource management, and enables users to configure, monitor and manage services and system-wide resources with systemd and the OpenLMI management infrastructure. With the default XFS file system, you can scale to 500TB, and there are additional file-system enhancements in ext4, parallel NFS, GFS2, NFS v4, and Btrfs.

It also helps identify and resolve difficult application performance problems with improved tools such as Tuna, SystemTap, Performance Co-Pilot, and Thermostat, and achieves faster, more responsive networking with support for 40Gb Ethernet links and TCP improvements such as Fast Open and Early Retransmit. Red Hat Enterprise Linux 7 RC also improves the desktop experience with the new GNOME 3 desktop, an upgraded NetworkManager, and WiGig wireless support.
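The systemd-based service management mentioned above is driven by declarative unit files rather than traditional init scripts. A minimal, purely illustrative unit file (the service name and paths below are made up) looks like this:

```ini
# /etc/systemd/system/myapp.service -- hypothetical example unit
[Unit]
Description=Example application service
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Once such a file is in place, the usual workflow is `systemctl enable myapp` followed by `systemctl start myapp`, with `journalctl -u myapp` to view the service’s logs.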

Red Hat Enterprise Linux 7 RC is now accessible to all interested parties, from end users to enterprises, seeking insight into how Red Hat is redefining the enterprise operating system. The release is also vital in helping Red Hat’s strategic partners fully certify their applications and systems with Red Hat Enterprise Linux 7.

Red Hat Enterprise Linux 7 RC includes expanded Windows interoperability capabilities, including integration with Microsoft Active Directory domains; significant file system enhancements, including XFS as the default and support for file systems up to 500TB; improved subsystem management through OpenLMI; and virtual machine (VM) migration from Red Hat Enterprise Linux 6 hosts to Red Hat Enterprise Linux 7 hosts without downtime or VM modification.


Facebook, Google, Intel, Microsoft, NetApp, Qualcomm, VMware And The Linux Foundation Form New Initiative To Prevent The Next Heartbleed

The OpenSSL Heartbleed disaster definitely opened up many people’s eyes to how underfunded and understaffed many of the open source projects the web relies on are. To prevent the next Heartbleed, Facebook, Google, Intel, Microsoft, NetApp, Qualcomm, VMware and The Linux Foundation today announced the “Core Infrastructure Initiative.” This initiative will fund and support important open source projects “that are in need of assistance.”

While it’s not clear how much money each of the participants is contributing, the Linux Foundation — which organized this program — says this is a “multi-million dollar project” and should be seen as the industry’s collective response to the Heartbleed crisis. The Linux Foundation will administer the initiative’s funds.

Unsurprisingly, the OpenSSL project will be the first to receive fellowship funding from the initiative. The idea behind the fellowships is to allow key developers to work on these projects full-time. Besides the funding, the projects that will receive support from the initiative will also get other forms of assistance to improve their security, including outside reviews, security audits, computing and test infrastructure, travel and other support.

Considering the importance of a project like OpenSSL, it is indeed somewhat shameful that it only received about $2,000 per year in donations. Money alone, of course, may not have been enough to help catch the Heartbleed bug, so it’s good to see that the participating companies are also dedicating test resources to this project.

“Just as The Linux Foundation has funded Linus Torvalds to be able to focus 100% on Linux development, we will now be able to support additional developers and maintainers to work full-time supporting other essential open source projects,” said Jim Zemlin, the executive director of the Linux Foundation, in a statement today.

The idea behind open source, of course, is to get as many people as possible to produce high-quality code that is also secure. Many of the projects we rely on day in and day out, however, have grown so complex that having only a few part-time developers working on them isn’t enough to ensure their quality and security. The Linux Foundation acknowledges as much today.

“The most recent Coverity Open Scan study of software quality has shown that open source code quality surpasses proprietary code quality. But as all software has grown in complexity – with interoperability between highly complex systems now the standard – the needs for developer support has grown.”

Looking ahead, the Core Infrastructure Initiative plans to move away from what is clearly a reactive, post-crisis mode to a more proactive one. Going forward, the initiative will focus more strongly on proactive reviews that identify the needs of the most important projects, hopefully before the next Heartbleed crisis hits.


Linux Kernel Panel: What’s what with Linux today

Summary: Some of Linux’s best and brightest kernel developers talk about the state of Linux development today.

Napa Valley, CA: At an exclusive gathering at the Linux Collaboration Summit, some of the crème de la crème of Linux developers talked about what’s going on with the Linux kernel today.

This year’s elite Linux kernel hacker panel was made up of Red Hat’s Dave Chinner; SUSE’s Mel Gorman; Facebook’s Jens Axboe; Nebula’s Matthew Garrett; The Linux Foundation’s own Greg Kroah-Hartman; and, as usual, moderator Jon Corbet, co-founder of the top hardcore Linux news site LWN.net.

The panel opened with Corbet pointing out that today “Almost all the people who work on the kernel are paid to do it. Only 10 percent to 20 percent are volunteers. What do your companies expect to get from your kernel work?”

Garrett replied, “Cynically we can’t depend on people to do the things we need to get done. I’ve written some code that helps our needs and then that helps other people. By showing that you’re really happy to help other people, they’re happy to help you. I help people, they help me.” The others agreed.

Corbet then asked Axboe what Facebook got from paying for Linux kernel development. He replied that, as shown by Open Compute, Facebook’s open-sourcing of its data-center technology, “It makes economic sense to develop with open source. Facebook probably saves a billion dollars in development costs alone.”

Gorman added that with open source development, you’re spreading around the risk of development. “If we were just working on our own stuff, it would be like working in an echo-chamber.”

Next, Corbet observed that “Many of us work for companies that don’t like each other. While some company agendas appear in Linux development, it’s not bad. How do we do that?”

Kroah-Hartman replied that “Competitors rely on each other to survive. Marketing and suits can fight, but they know and we [the developers] know that we’re in this together.” Axboe explained, “Our work runs across companies.”

Besides, Garrett added, “These people are my friends. We like each other.” To which, Chinner replied “We’re working on a common goal. No single company has the expertise to do the job.”

That’s not to say that everyone loves everyone else in open-source circles. Far from it! Corbet noted that some user-space developers (that is, independent software vendors, or ISVs, who build programs on top of Linux) are sometimes very angry with the Linux kernel developers.

Recently, for example, the PostgreSQL community was angry with the Linux kernel developers for making changes that made their database management system run slower.

Chinner replied: “A lot of people watch what we do, but we don’t see what they’re doing until we break something of theirs. I treat this problem by telling them to tell me about their workloads and what they do. If I don’t understand what you’re doing, I can’t help you. ISVs need to collaborate. This needs to be a two-way street.”

It doesn’t have to be that way. Kroah-Hartman said he hears about problems like this “all the time. We are very visible. It’s easy to find us. Our e-mail addresses are out there. Tell us your problems and we’ll see what we can do.”

In the case of PostgreSQL, for example, Gorman commented, “We [the Linux kernel developers] felt that Postgres was fine. 350 messages later I knew people weren’t happy. Part of that is our problem. We need to be more open and invite them to talk to us.”

Some programmers, both inside and outside Linux kernel development circles, also have problems with new Linux additions and features. In particular, Corbet mentioned that lots of people still hate cGroups (control groups), a Linux kernel feature used to limit, account for, and isolate process CPU, I/O, system memory, and other resources.
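One concrete way to see cGroups in action: every process on a modern Linux system already belongs to one or more control groups, and the kernel exposes that membership through /proc. The snippet below (Linux-only) simply prints the current process’s cgroup membership:

```python
# Print the control-group membership of the current process.
# Each line of /proc/self/cgroup has the form "hierarchy-id:controllers:path";
# on a cgroup-v2 system there is a single "0::/..." entry.
with open("/proc/self/cgroup") as f:
    for line in f:
        hierarchy_id, controllers, path = line.rstrip("\n").split(":", 2)
        print(f"hierarchy {hierarchy_id}: controllers={controllers or '(v2)'} path={path}")
```

Resource limits themselves are applied by writing to files under /sys/fs/cgroup, which normally requires root, so the read-only view above is the safest way to poke around.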

Kroah-Hartman replied that “We’re way past what Unix could do. We blaze new trails.” Gorman acknowledged this, but “Some changes aren’t helpful. We shouldn’t bolt on new stuff to fix old problems.” An example of that, in his opinion, is cGroups.

Chinner added that things change and it’s impossible to predict how what seemed like a good idea at the time for one case would end up not being right for other cases years down the road. “With cGroups, it was meant for High-Performance Computing (HPC). Today we have completely different use cases.”

Besides, Chinner continued, “We don’t get things right the first time. We don’t know how tech will be used five, ten years down the track. Sometimes we get it wrong.”

For example, Kroah-Hartman said, “There was recently a nasty USB 3 bug in the latest test Linux kernels. We broke a lot of people’s machines. We fixed it. That was my mistake.” All the developers agreed that more testing, particularly automated testing—not just eyeballs—is needed to catch more bugs.

That said, Garrett pointed out, “Linux is the biggest open source project in the history of the world and it has minimal management. It’s a miracle it works as well as it does.” Besides, Kroah-Hartman added, “We still do it better [build operating systems] than anyone else. We fail less. Give us credit.”


Linux snapshot: Pay rates and employers with the most job ads

Linux experts get bigger paychecks and better opportunities, as their skills are still hard to find

Computerworld – If paychecks are any kind of a measure, then people with Linux skills are doing better than most.

The national median annual IT salary is $91,050, or $43.77 per hour, while the national median annual salary for Linux-certified information technology professionals is $96,750, or $46.51 an hour, according to Yoh Services, a staffing firm that produces its own wage index. The index generally focuses on temporary wages.
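The hourly figures appear to assume a standard 2,080-hour work year (40 hours per week times 52 weeks); that is our assumption, but the quick check below confirms the annual and hourly numbers are consistent under it:

```python
# Verify the annual-to-hourly conversion in the quoted salary figures,
# assuming a standard 2,080-hour work year (40 h/week * 52 weeks).
HOURS_PER_YEAR = 40 * 52  # 2,080 hours

for label, annual, quoted_hourly in [
    ("all IT", 91_050, 43.77),
    ("Linux-certified IT", 96_750, 46.51),
]:
    hourly = round(annual / HOURS_PER_YEAR, 2)
    print(f"{label}: ${annual:,}/year -> ${hourly}/hour (quoted ${quoted_hourly})")
    assert hourly == quoted_hourly
```

Under the same assumption, the Linux-certification premium works out to $5,700 a year, or about $2.74 an hour.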

At the request of Computerworld, Yoh gathered data on permanent jobs in Linux-specific occupations that was compiled by Wanted Technologies, which does labor market analysis.

Yoh searched for IT jobs requiring Linux knowledge with optional skills that included Java, quality assurance (QA), the Microsoft .Net Framework, Oracle WebLogic, PHP (Hypertext Preprocessor), extract-transform-load (ETL) software, and/or Oracle.

When Yoh looked at the national jobs database, a search for IT jobs requiring Linux knowledge revealed that the top three positions are Java developer, with 12,300 jobs; systems engineer, with 7,400; and senior software engineer, with 6,850.

“Android and iOS developers are almost becoming a dime a dozen,” said Joel Capperella, vice president of marketing at Yoh. “Linux is still a unique skill; not everybody owns it.”

People in their twenties coming out of school “think they are going to make big dollars working for a start-up or developing Web apps, and all of a sudden they find, ‘holy smokes, I’m one of thousands,'” said Capperella.

In terms of employers, Yoh’s research found that there are 8,000 employers currently hiring for IT jobs with Linux requirements. The companies with the most Linux-related job postings are: Amazon, with 2,356 jobs; Lockheed Martin, 713; Dell, 679; Northrop Grumman, 569; and Computer Sciences Corporation, 535.

Amazon posted 16,100 IT jobs last year, the most by any one firm.

The Northeast leads in demand for people with Linux skills, Yoh found.

Yoh tracks wages in the temporary IT market in its Yoh Index, which it has published since 2001. The index covers a spectrum of IT jobs, and it found that degree-holding workers and certified technical professionals in temporary jobs saw their wages rise by an average of approximately 4.4% in 2013. The increase might have been greater had it not been for the impact of sequestration, Yoh said.

Click here for full story

How OpenStack parallels the adoption of Linux

In spite of its considerable momentum, there are still skeptics about whether OpenStack will ultimately succeed. My colleague tackled some of that skepticism in a blog post last year and I’m not going to rehash those arguments here. Rather, I’m going to make some observations about how OpenStack is paralleling, and will likely continue to parallel, the adoption of another open source project that I think we can all agree has become popular and successful—namely Linux. [1]

1. Part and parcel of a new approach to computing

Linux came about at a time when computing was changing. It had become distributed, and the rise of the web was leading to new functions and new requirements. Much of Linux’s early growth came from powering new Internet infrastructure. It was from that beachhead that Linux branched out into more traditional enterprise operating system roles. Similarly, OpenStack is part of the cloud computing wave, which is characterized by new levels of standardization and automation combined with an on-demand, self-service approach to delivering computing resources to users.

2. Adoption rates won’t be uniform

Linux early adopters were often Internet hosting providers and other technically savvy technology consumers. Early OpenStack adopters fit a similar profile. In fact, the OpenStack project was originally founded by NASA and Rackspace, a hosting provider. Other early users of the technology include organizations such as financial services firms seeking to bring public cloud computing benefits into their own datacenters for a more flexible infrastructure that remains fully under their control. Mainstream enterprise adoption, especially for workloads that aren’t cloud enabled, will follow over time. 

3. It takes time

And, in general, adoption of new technologies always takes place over years. Depending upon how you count, significant Linux adoption by mainstream enterprises took up to a decade from its inception. Many considered the Linux 2.4 kernel to be the first one that was “enterprise ready” (whether or not they were able to define what they meant by that term) and that didn’t appear in commercial Linux distributions until about 2001—well after Linux was already in widespread use for Internet infrastructure.

That’s not to say OpenStack’s timeline will be so extended. Today, open source software is widely accepted within enterprises in a way that wasn’t the case c. 2001. But no technology gets adopted overnight. (Even x86 virtualization took perhaps five years to become truly widespread.) 

4. About community as much as technology

Early Linux success didn’t come about because it was better technology than Unix. For the most part it wasn’t. Rather, it often won because it was less expensive than proprietary Unix running on proprietary hardware. It also gave users a choice of both distributions and hardware vendors, as well as the ability to customize the code should they so choose. However, what has truly distinguished Linux, and open source more broadly, over time is the power of open source development models and the innovation that comes from the communities around projects.

Today, across major areas of the market such as infrastructures for handling high volume data, open source technologies are behind most of the ongoing rapid change. That’s the case with OpenStack as well. There are other cloud infrastructure projects—some of which arguably have a head-start in commercial deployments. But it’s OpenStack that’s garnering the most industry attention because OpenStack has the biggest and most diverse community. 

5. Open source development is an incremental process

One of the knocks one hears about OpenStack is that it’s not mature. It’s not. And indeed this is a common refrain about many early-stage open source projects. Of course, early versions of proprietary products aren’t necessarily mature either. But, usually, the company developing proprietary software has at least made an effort to release something that’s complete and functional.

Open source, on the other hand, is a much more iterative process, beginning with early code that is not only immature but has clear functional gaps. That was the case with Linux, which began life as essentially a hobbyist operating system before evolving into something appropriate for Internet infrastructure and, finally, into an operating system capable of handling the most demanding enterprise workloads. OpenStack will follow a similar trajectory. 

6. Commercial distributions make consumption by businesses possible

One of the important steps that needed to happen in order for Linux to be accepted into mainstream enterprises was that it had to be made available as a commercial product. Most enterprises aren’t interested in consuming open source projects—especially for production workloads. They want products, which is to say bits that are thoroughly hardened, tested, documented, and supported. They want ecosystems around those products including whatever certifications are required.

Likewise with OpenStack: some early adopters are working directly with, and even contributing to, the OpenStack project, but most enterprises are looking for an OpenStack product. 

7. Need for complementary components and integration

Customers don’t buy infrastructure for the sake of buying infrastructure. An obvious statement, perhaps, but one that nonetheless sometimes seems to be forgotten. Linux succeeded because it became a great platform on which to run everything from networking services to line-of-business applications. Linux distributions include many of the open source components needed to build highly functional infrastructure; the Apache Web Server was an important early component. But the availability over time of additional software needed by enterprises, including proprietary software, is what made Linux an integral part of the software stack at so many organizations.

Similarly, OpenStack will increasingly include many of the components needed to build out the Infrastructure-as-a-Service (IaaS) layer. However, complementary products such as cloud management platforms, application lifecycle management, and Platform-as-a-Service (PaaS) are needed to build and manage a complete hybrid cloud. And, of course, that cloud also needs an operating system to support the applications running in the cloud—a role for which Linux is ideally suited. 


One thing is much different between the early days of Linux adoption and today’s OpenStack: the environment. Then, open source was still a new concept to many. Major proprietary software vendors did their best to convince customers that open source was somehow riskier than their own products. Good open source project governance, licensing, and development practices were still being learned, often by trial and error.

Today, as can be seen in the pace of OpenStack’s advance, the milieu is vastly different. Open source software is ubiquitous and it’s widely recognized that open, collaborative approaches are often just a better way to develop software. One need only look at the membership of the OpenStack Foundation to see just how many major IT vendors and how many individuals recognize this to be the case.

Click here for full story