An open letter to entry-level IT employees

The world of IT can be unforgiving, especially for those just starting out. The keys to finding and maintaining success are humility, patience and an open mind.

You’ve graduated and prepped, perhaps you’ve even collected some certifications, and you’re now ready to take that first step into the corporate IT world as an entry-level IT employee. These first steps can be challenging, but not just from a technical standpoint. Continue reading “An open letter to entry-level IT employees”

Google Gets Serious about Chrome Security on Linux

Google was a bit slow in the beginning getting its Chrome browser ready for Linux. That’s now changing, as Google is set to take advantage of an advanced Linux kernel feature that could well make Chrome on Linux more secure than on any other OS.

Chrome 23 dev-channel now takes advantage of the Seccomp-BPF feature that debuted in the recent Linux 3.5 kernel.

“Seccomp filtering provides a means for a process to specify a filter for incoming system calls,” kernel developer Will Drewry wrote in a mailing list message.

Google developer Julien Tinnes explained that, “with Seccomp-BPF, BPF programs can now be used to evaluate system call numbers and their parameters.”

In very basic terms, it means more control over the sandbox and less chance of escape via arbitrary code execution.
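To make the mechanism concrete, here is a minimal Python sketch that builds the kind of syscall-number allow-list a seccomp-BPF filter expresses (constants follow <linux/filter.h> and <linux/seccomp.h>; the syscall numbers are x86-64). This is an illustration, not Chrome’s actual sandbox code. Actually installing the filter would additionally require prctl(PR_SET_NO_NEW_PRIVS, 1) followed by prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog), which is omitted here.

```python
import struct

# BPF opcodes, combined as in <linux/bpf_common.h>
BPF_LD_W_ABS = 0x00 | 0x00 | 0x20   # load 32-bit word at absolute offset
BPF_JEQ_K    = 0x05 | 0x10 | 0x00   # jump if accumulator == constant
BPF_RET_K    = 0x06                 # return constant

SECCOMP_RET_ALLOW = 0x7FFF0000
SECCOMP_RET_KILL  = 0x00000000

def insn(code, k, jt=0, jf=0):
    # struct sock_filter { __u16 code; __u8 jt; __u8 jf; __u32 k; }
    return struct.pack("HBBI", code, jt, jf, k)

def build_allowlist(allowed_nrs):
    """Build a seccomp-BPF program that allows only the given syscall
    numbers and kills the process on anything else."""
    prog = [insn(BPF_LD_W_ABS, 0)]                 # load seccomp_data.nr
    for nr in allowed_nrs:
        # if the syscall number matches, fall through to RET ALLOW;
        # otherwise skip over it to the next comparison
        prog.append(insn(BPF_JEQ_K, nr, jt=0, jf=1))
        prog.append(insn(BPF_RET_K, SECCOMP_RET_ALLOW))
    prog.append(insn(BPF_RET_K, SECCOMP_RET_KILL)) # default: kill
    return b"".join(prog)

# read, write, exit, exit_group on x86-64
prog = build_allowlist([0, 1, 60, 231])
print(len(prog) // 8, "instructions")              # → 10 instructions
```

The kernel runs this tiny program on every system call the sandboxed process makes, which is what gives Chrome finer-grained control than the older “strict” seccomp mode.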

Click here for full Story

Google Chrome 21 is out

Google Chrome is a browser that combines a minimal design with sophisticated technology to make the web faster, safer, and easier. Google Chrome version 21.0.1180.60 (21.0.1180.57 for Mac and Linux) is out, fixing 15 security vulnerabilities in the search giant’s browser. Strictly from a security perspective, you should upgrade as soon as possible and download the latest version of Chrome directly from Google:

Download Latest Google Chrome

Security fixes and rewards:

Please see the Chromium security page for more detail. Note that the referenced bugs may be kept private until a majority of our users are up to date with the fix.

  • [Linux only] [125225] Medium CVE-2012-2846: Cross-process interference in renderers. Credit to Google Chrome Security Team (Julien Tinnes).
  • [127522] Low CVE-2012-2847: Missing re-prompt to user upon excessive downloads. Credit to Matt Austin of Aspect Security.
  • [127525] Medium CVE-2012-2848: Overly broad file access granted after drag+drop. Credit to Matt Austin of Aspect Security.
  • [128163] Low CVE-2012-2849: Off-by-one read in GIF decoder. Credit to Atte Kettunen of OUSPG.
  • [130251] [130592] [130611] [131068] [131237] [131252] [131621] [131690] [132860] Medium CVE-2012-2850: Various lower severity issues in the PDF viewer. Credit to Mateusz Jurczyk of Google Security Team, with contributions by Gynvael Coldwind of Google Security Team.
  • [132585] [132694] [132861] High CVE-2012-2851: Integer overflows in PDF viewer. Credit to Mateusz Jurczyk of Google Security Team, with contributions by Gynvael Coldwind of Google Security Team.
  • [134028] High CVE-2012-2852: Use-after-free with bad object linkage in PDF. Credit to Alexey Samsonov of Google.
  • [134101] Medium CVE-2012-2853: webRequest can interfere with the Chrome Web Store. Credit to Trev of Adblock.
  • [134519] Low CVE-2012-2854: Leak of pointer values to WebUI renderers. Credit to Nasko Oskov of the Chromium development community.
  • [134888] High CVE-2012-2855: Use-after-free in PDF viewer. Credit to Mateusz Jurczyk of Google Security Team, with contributions by Gynvael Coldwind of Google Security Team.
  • [134954] [135264] High CVE-2012-2856: Out-of-bounds writes in PDF viewer. Credit to Mateusz Jurczyk of Google Security Team, with contributions by Gynvael Coldwind of Google Security Team.
  • [$1000] [136235] High CVE-2012-2857: Use-after-free in CSS DOM. Credit to Arthur Gerkis.
  • [$1000] [136894] High CVE-2012-2858: Buffer overflow in WebP decoder. Credit to Jüri Aedla.
  • [Linux only] [137541] Critical CVE-2012-2859: Crash in tab handling. Credit to Jeff Roberts of Google Security Team.
  • [137671] Medium CVE-2012-2860: Out-of-bounds access when clicking in date picker. Credit to Chamal de Silva.

For Chrome 21, Google paid security researchers a grand total of $2,000 in rewards as part of its bug bounty program. This payout is smaller than usual because Google itself found most of the vulnerabilities this time, using its own AddressSanitizer tool.

Still, Mountain View recently quintupled its maximum bug bounty to $20,000. The company has so far received about 800 qualifying vulnerability reports that span across the hundreds of Google-developed services, as well as the software written by 50 or so firms it has acquired. In just over a year, the program has paid out around $460,000 to roughly 200 individuals.

For the record, Google Chrome 20 was released just five weeks ago (and then updated again three weeks ago). At the time, I expected Chrome 21 to be released “sometime in August.” It turns out I was off by a day.

Click here for full Story

Insanity: Google Sends New Link Warnings, Then Says You Can Ignore Them

Google’s war on bad links officially became insane today. For months, Google has been sending out warnings about bad links and telling publishers they should act on them, lest they be penalized. Today, Google said the latest round of warnings, sent out this week, can be safely ignored. That’s not “more transparency,” as Google posted. That’s more confusion.

It’s easiest to do the history first, to better understand the confusion caused by today’s post.

How We Got Here: Link Warnings Earlier This Year

Toward the end of March and in early April, Google began sending out warnings about “artificial” or “unnatural” links, like this one:

Dear site owner or webmaster of….We’ve detected that some of your site’s pages may be using techniques that are outside Google’s Webmaster Guidelines.

Specifically, look for possibly artificial or unnatural links pointing to your site that could be intended to manipulate PageRank. Examples of unnatural linking could include buying links to pass PageRank or participating in link schemes.

We encourage you to make changes to your site so that it meets our quality guidelines. Once you’ve made these changes, please submit your site for reconsideration in Google’s search results.

If you find unnatural links to your site that you are unable to control or remove, please provide the details in your reconsideration request.

If you have any questions about how to resolve this issue, please see our Webmaster Help Forum for support.

Sincerely, Google Search Quality Team

There was some confusion about whether these messages meant that a site was actually penalized for having these links pointing at them or whether Google was just informing the sites but not really taking any negative action. Google’s response on this wasn’t clear:

Google has been able to trace and take action on many types of link networks; we recently decided to make that action more visible. In the past, some links might have been silently distrusted or might not have carried as much weight. More recently, we’ve been surfacing the fact that those links aren’t helping to improve ranking or indexing.

The Penguin Attacks

In late April, the Google Penguin Update went live. Designed to fight spam, it took action either by penalizing publishers who had participated in bad linking activities (as determined by Google) or by discounting those links, so they no longer carried as much weight.

All hell broke loose in some quarters, especially among those who had been actively using link networks to boost their rankings in ways that went against Google’s guidelines. One of the suggested recovery options from Google was to remove bad links.

Google Advice: Get Rid Of Bad Links

But what if people couldn’t get links taken down? The head of Google’s web spam team, Matt Cutts, suggested only in general terms that removal was possible, without giving any specific advice.

This lent further support to those who argued that “negative SEO” was suddenly a real possibility: that any publisher could be targeted with “bad links” and made to plunge in Google’s rankings. Google stressed that negative SEO of this kind is rare and hard. To date, negative SEO still hasn’t seemed to be a widespread problem for the vast majority of publishers on the web.

Those reassurances, along with a Google help page update saying Google “works hard to prevent” negative SEO, haven’t calmed everyone. Negative SEO has remained a rallying cry, especially for many hit by Penguin (and many were deservedly hit) who are looking for a way to fight back against Google.

The New Link Building: Remove My Link Requests

But aside from the negative SEO sideshow, plenty of publishers tried to follow Google’s advice to get links removed. I’ve even had one such request come to me, from a publisher who was listed in our SearchCap daily newsletter in the past and wanted us to pull down a link. Insane. A link from a reputable site like ours is exactly what you want, and yet they wanted it removed.

The insanity has gotten even worse. We’ve had people threatening to sue to have links removed. We’ve covered that before. Boing Boing also covered another case today (without providing any of the background on how Google itself has fueled some of this craziness).

Today, we covered how some directories are now charging people to have links removed. Let’s be really clear on how topsy-turvy things have become. People have wanted links in the past and have been willing to pay for them (despite this being against Google’s rules). Now they’re perhaps willing to pay to have links taken down.

June: Google Says Don’t Ignore Link Warnings

But you’ve got to get those links removed, if you’ve gotten a warning message. After all, Google has said so. In June, at our SMX Advanced conference, Cutts said this about those link warnings:

You should pay attention. Typically your web site ranking will drop if you don’t take action after you get one of those notices.

Here’s the extended video clip on the topic:

But again, what to do if you can’t get links removed? Cutts said that Google might release a “disavow” tool. By the end of June, Bing actually launched such a link disavow tool, though of course that didn’t help with Google. Those who had notices from Google about bad links pointing at them, notices they were supposed to act on, still might not be able to get those links removed.

New Batch Of Warnings Goes Out

That leads to yesterday, when Google began sending out a new batch of link notices. Here’s an example of what one of those looks like:

Dear site owner or webmaster of….We’ve detected that some of your site’s pages may be using techniques that are outside Google’s Webmaster Guidelines.

Specifically, look for possibly artificial or unnatural links pointing to your site that could be intended to manipulate PageRank. Examples of unnatural linking could include buying links to pass PageRank or participating in link schemes.

We encourage you to make changes to your site so that it meets our quality guidelines. Once you’ve made these changes, please submit your site for reconsideration in Google’s search results.

If you find unnatural links to your site that you are unable to control or remove, please provide the details in your reconsideration request.

If you have any questions about how to resolve this issue, please see our Webmaster Help Forum for support.

Sincerely, Google Search Quality Team

Yes, that’s exactly the same content as what Google sent in late March. Nothing in the message gives the impression it can be ignored. It even encourages people who can’t get links removed to actively file a reconsideration request with Google.

July: Google Says You Can Ignore Link Warnings

But today, Cutts said this about the messages in a Google+ post:

If you received a message yesterday about unnatural links to your site, don’t panic. In the past, these messages were sent when we took action on a site as a whole. Yesterday, we took another step towards more transparency and began sending messages when we distrust some individual links to a site. While it’s possible for this to indicate potential spammy activity by the site, it can also have innocent reasons.

For example, we may take this kind of targeted action to distrust hacked links pointing to an innocent site. The innocent site will get the message as we move towards more transparency, but it’s not necessarily something that you automatically need to worry about.

If we’ve taken more severe action on your site, you’ll likely notice a drop in search traffic, which you can see in the “Search queries” feature in Webmaster Tools, for example.

As always, if you believe you have been affected by a manual spam action and your site no longer violates the Webmaster Guidelines, go ahead and file a reconsideration request. It’ll take some time for us to process the request, but you will receive a followup message confirming when we’ve processed it.

Like I said, this latest round of messages doesn’t seem to make things more transparent. The messages appear to be the exact same ones that Google previously told people they SHOULD worry about.

How About Just Saying If There’s A Real Concern

How do you know if you’re at risk if you get one of these new messages? Apparently, you also need to look at your traffic from Google and see if there’s a plunge. If so, you have a bad link problem. If not, well, you got a message that apparently can be ignored.

It would be much easier if the messages themselves said whether action was really required, and whether there really was a penalty (in a world where what used to be called penalties might now be “adjustments”).

That would be transparent. Instead, I predict this is all just going to cause greater confusion and panic, not more clarity and calmness.

It’s also yet another sign of how creaky the foundation of ranking sites based on links has become. It gets ever more difficult these days to know what’s supposed to help or hurt. Links as votes suck.

Postscript: Google’s Matt Cutts commented below on Monday, July 23rd that the newer messages that can be safely ignored are now actually saying that:

An engineer worked over the weekend and starting with the messages that we sent out on Sunday, the messages are now different so that you can tell which type of situation you’re in. We also changed the UI in the webmaster console to remove the yellow caution sign for these newer messages. That reflects the fact that these newer notifications are much more targeted and don’t always require action by the site owner.

See also our follow-up story: Google Updates Link Warnings To (Sort Of) Clarify They Can Be Ignored (Maybe).

Read Full Article

5 things about FOSS Linux virtualization you may not know

In January I attended the 10th annual Southern California Linux Expo. In addition to speaking and running the Ubuntu booth, I had an opportunity to talk to other sysadmins about everything from selection of distribution to the latest in configuration management tools and virtualization technology.

I ended up in a conversation with a fellow sysadmin who was using a proprietary virtualization technology on Red Hat Enterprise Linux. Not only did he have surprising misconceptions about the FOSS (Free and Open Source Software) virtualization tools available, he assumed that some of the features he was paying extra for (or not, as the case may be) wouldn’t be in the FOSS versions of the software available.

Here are five features that you might be surprised to find included in the space of FOSS virtualization tools:

1. Data replication with verification for storage in server clusters

When you consider storage for a cluster there are several things to keep in mind:

  • Storage is part of your cluster too, so you want it to be redundant
  • For immediate failover, you need replication between your storage devices
  • For data integrity, you want a verification mechanism to confirm the replication is working

Regardless of what you use for storage (a single hard drive, a RAID array, or an iSCSI device), the open source DRBD (Distributed Replicated Block Device) offers quick replication over a network backplane and verification tools you can run at regular intervals to ensure data integrity.
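As a sketch, a minimal two-node DRBD resource with online verification enabled might look like the following (the hostnames, devices, and addresses are placeholders; the verify-alg option selects the checksum used by the online verify run):

```
resource r0 {
  protocol C;                    # synchronous replication
  net {
    verify-alg sha1;             # checksum for online verification
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.10.1:7789;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.10.2:7789;
    meta-disk internal;
  }
}
```

A periodic cron job running “drbdadm verify r0” then checks that the two copies actually match, logging any out-of-sync blocks.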

Looking to the future, the FOSS distributed object store and file system Ceph is showing great promise for more extensive data replication.

2. Automatic failover in cluster configurations

Whether you’re using KVM (Kernel-based Virtual Machine) or Xen, automatic failover can be handled via a couple of closely integrated FOSS tools, Pacemaker and Corosync. At the core, Pacemaker handles configuration of the resources themselves and the logic for moving virtual machines, while Corosync handles quorum and “aliveness” checks of the hosts.
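As an illustration, with the pcs shell a virtual machine can be registered as a Pacemaker resource using the VirtualDomain resource agent; the resource name and file paths below are placeholders:

```
# Register the VM as a managed resource; Pacemaker restarts or relocates
# it if its host fails (Corosync supplies membership and quorum underneath).
pcs resource create vm-guest1 ocf:heartbeat:VirtualDomain \
    config=/etc/libvirt/qemu/guest1.xml \
    hypervisor=qemu:///system \
    migration_transport=ssh \
    meta allow-migrate=true \
    op monitor interval=30s
```

With allow-migrate set, the cluster will prefer a live migration over a stop/start when it relocates the guest.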

3. Graphical interface for administration

While development of graphical interfaces for administration is an active area, many of the basic tasks (and increasingly, more complicated ones) can be made available through the Virtual Machine Manager application. This manager uses the libvirt toolkit, which can also be used to build custom interfaces for management.

The KVM website has a list of other management tools, ranging from command-line (CLI) to Web-based, as does the Xen wiki.

4. Live migrations to other hosts

In virtualized environments it’s common to reboot a virtual machine to move it from one host to another, but when shared storage is used it is also possible to do live migrations on KVM and Xen. During a live migration, the virtual machine retains its state as it moves between the physical machines. Since there is no reboot, connections stay intact, and sessions and services continue to run with only a short blip of unavailability during the switchover.
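For example, with shared storage in place, a running KVM guest can be moved between hosts with a single virsh command (the guest and host names here are placeholders):

```
# Live-migrate the running guest "guest1" to node2 over SSH;
# its disk must live on storage visible to both hosts.
virsh migrate --live --verbose guest1 qemu+ssh://node2/system
```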

Documentation for KVM, including hardware and software requirements for such support, can be found here:

5. Over-allocating shared hardware

KVM has the option to take full advantage of hardware resources by over-allocating both RAM (with adequate swap space available) and CPU. Details about over-allocation and key warnings can be found here: Overcommitting with KVM.
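Before over-allocating RAM, it is worth doing the arithmetic: the memory promised to all guests should stay within host RAM plus swap, minus a reserve for the host itself. The following is a hypothetical helper to sketch that check, not part of any KVM tooling:

```python
def overcommit_headroom_mib(guest_mem_mib, host_ram_mib, host_swap_mib,
                            host_reserve_mib=4096):
    """RAM over-allocation sanity check.

    guest_mem_mib: list of per-guest memory allocations (MiB).
    Returns the remaining headroom in MiB; a negative value means the
    guests are promised more memory than RAM + swap can back."""
    budget = host_ram_mib + host_swap_mib - host_reserve_mib
    return budget - sum(guest_mem_mib)

# Example: a host with 16 GiB RAM and 8 GiB swap running three 6 GiB guests
print(overcommit_headroom_mib([6144, 6144, 6144], 16384, 8192))  # → 2048
```

A positive result only means swap can absorb the worst case; heavily overcommitted guests that actually touch all their memory will still page and perform poorly, which is why the Red Hat documentation linked above urges caution.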


Data replication with verification for storage, automatic failover, a graphical interface for administration, live migrations, and over-allocation of shared hardware are all currently available with the FOSS virtualization tools included in many modern Linux distributions. As with any move to a more virtualized environment, deployments require diligent testing and configuration, but there are many online resources available, as well as the friendly folks at LinuxForce, to help.

Click here for full Story