An open letter to entry-level IT employees


The world of IT can be unforgiving, especially for those just starting out. The keys to finding and maintaining success are humility, patience and an open mind.

You’ve graduated and prepped, perhaps you’ve even collected some certifications, and you’re now ready to take that first step into the corporate IT world as an entry-level IT employee. These first steps can be challenging, but not just from a technical standpoint.

Best Web Hosting 2012 – Best Linux & Windows Web Hosting Providers

PR Web

San Francisco, CA (PRWEB) November 01, 2012

BestHostingSearch.com, a leading web hosting review site since 2006, named BlueHost the best web host on the Linux platform and WinHost the best web host on the Windows platform for individuals and small businesses, based on their web hosting technology, features, reliability, performance, technical support, and affordability.

Best Web Hosting 2012 > PHP, ROR, Python, Perl

BlueHost was named the best web hosting provider of 2012 on the Linux platform for hosting sites developed in PHP, Zend Framework, Ruby on Rails, Python, and Perl. BlueHost offers a single all-in-one unlimited hosting plan, “BlueHost Professional,” the most feature-rich Linux-based plan reviewed. Besides supporting almost all modern programming languages, it includes one free domain and $100 in Google AdWords credits, allows hosting unlimited sites on one account for a single payment, and provides shell access (SSH), email, MySQL, PostgreSQL, FTP, SSL, and more. It regularly starts at $6.95/mo, but BlueHost is now offering a 44% discount, at $3.95/mo, for all visitors who go through this BlueHost promotional link.

BlueHost was founded in 1996 to offer affordable hosting solutions to individuals and small businesses. Unlike many other web hosts, BlueHost operates three world-class dedicated data centers in Provo, Utah, into which it has invested more than $20 million since 2010. All of the data centers are eco-friendly, with total Internet bandwidth exceeding 150,000 Mbit/s. BlueHost currently serves 2.5 million customers worldwide and is growing fast, adding approximately 20,000 new customers each month.

To read the in-depth BlueHost review, visit http://besthostingsearch.com/bluehost-review

Best Web Hosting 2012 > ASP.NET, ASP.NET MVC, Silverlight

WinHost was named the best web hosting provider of 2012 on the Windows platform for hosting sites developed with ASP.NET, ASP.NET MVC, and Silverlight. WinHost is the most developer- and technology-friendly web hosting company BestHostingSearch.com has reviewed so far, supporting almost all of the latest cutting-edge Microsoft technology. It was one of the first ASP.NET hosting providers to claim support for Windows Server 2012, ASP.NET 4.5, and MSSQL 2012. The WinHost ASP.NET hosting plan now supports Windows Server 2008 R2 and 2012, MSSQL 2008 R2 and 2012, ASP.NET 2/3.5 SP1/4/4.5, ASP.NET MVC 2/3/4, and Silverlight 4/5, and provides Full Trust configuration, remote IIS management, and remote MSSQL management.

WinHost ASP.NET hosting regularly starts at $4.95/mo, but it is now offering two months free with annual billing (effectively $4.12/mo) for all visitors who go through this WinHost promotional link.

To learn more about the award of best web hosting providers in 2012, visit http://besthostingsearch.com/web-hosting-guide/best-web-hosting-2012

About BestHostingSearch.com
BestHostingSearch.com has been a leading web hosting review site since 2006. It ranks web hosting providers based on its own hosting experience and on reviews collected from real customers, helping people find the best web hosting deal and avoid wasting time and money on a bad choice.


Google Gets Serious about Chrome Security on Linux

Google was a bit slow in the beginning getting its Chrome browser ready for Linux. That’s now changing, as Google is set to take advantage of an advanced Linux kernel feature that could well make Chrome more secure on Linux than on any other OS.

Chrome 23 dev-channel now takes advantage of the Seccomp-BPF feature that debuted in the recent Linux 3.5 kernel.

“Seccomp filtering provides a means for a process to specify a filter for incoming system calls,” kernel developer Will Drewry wrote in a mailing list message.

Google developer Julien Tinnes explained that, “with Seccomp-BPF, BPF programs can now be used to evaluate system call numbers and their parameters.”

In very basic terms, it means more control over the sandbox and less chance of escape via arbitrary code execution.
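To make the idea concrete, here is a minimal sketch (in Python, purely for illustration) of the kind of decision a Seccomp-BPF filter makes: given a syscall number and optionally its arguments, return an action. Real filters are small BPF programs evaluated inside the kernel; the syscall numbers and the argument check below are invented for this sketch and are not Chrome’s actual policy.

```python
# Toy model of a seccomp-BPF decision: syscall number (plus arguments)
# in, action out. Unknown syscalls are killed; known ones may still be
# rejected based on their arguments.

SECCOMP_RET_ALLOW = "ALLOW"
SECCOMP_RET_KILL = "KILL"

PROT_EXEC = 0x4  # illustrative mmap protection flag

# Hypothetical allowlist: syscall number -> optional argument check.
ALLOWED_SYSCALLS = {
    0: None,   # read
    1: None,   # write
    60: None,  # exit
    # mmap is allowed only without PROT_EXEC (no new executable mappings)
    9: lambda args: args.get("prot", 0) & PROT_EXEC == 0,
}

def evaluate(syscall_number, args=None):
    """Return the filter's action for one incoming system call."""
    if syscall_number not in ALLOWED_SYSCALLS:
        return SECCOMP_RET_KILL
    check = ALLOWED_SYSCALLS[syscall_number]
    if check is not None and not check(args or {}):
        return SECCOMP_RET_KILL
    return SECCOMP_RET_ALLOW
```

The real mechanism works the same way in spirit: a compromised renderer that tries a syscall outside the filter’s allowlist is terminated before the kernel ever services the call.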


Alibaba’s Aliyun OS hopes to be ‘Android of China’

Alibaba is pushing its Linux-based mobile operating system (OS), Aliyun, to become the “Android of China” and provide another choice for smartphone makers.

A report from Sohu IT on Monday cited Alibaba Group’s chief strategy officer Zeng Ming as saying the Aliyun OS is gaining traction among mobile phonemakers. “We want to be the Android of China and we have quite a lot of new partners in line,” he said.

According to Zeng, the total number of smartphone vendors adopting Aliyun will increase to five from the current two, namely K-Touch and Haier, by the end of the year. However, he declined to reveal the names of other vendors.

“If I were a mobile phone vendor and my only choice is Android, I will be quite scared. Any company will want to have at least two suppliers,” Zeng said.

The original Android OS from Google faces several challenges in China as Google Search, Google Maps, and Gmail functionalities are limited in the country, he said. That is why Android is not able to provide a good user experience while Aliyun can, he added.

However, Sohu IT noted several handset makers believe it will be difficult to completely replace Android. This is due to the mobile OS’s ecosystem of phonemakers and app developers, it said.


Google Chrome 21 is out

Google Chrome is a browser that combines a minimal design with sophisticated technology to make the web faster, safer, and easier. Google Chrome version 21.0.1180.60 (21.0.1180.57 for Mac and Linux) is out, fixing 15 security vulnerabilities in the search giant’s browser. Strictly from a security perspective, you should upgrade as soon as possible and download the latest version of Chrome directly from google.com/chrome.

Download Latest Google Chrome


Security fixes and rewards:

Please see the Chromium security page for more detail. Note that the referenced bugs may be kept private until a majority of our users are up to date with the fix.

  • [Linux only] [125225] Medium CVE-2012-2846: Cross-process interference in renderers. Credit to Google Chrome Security Team (Julien Tinnes).
  • [127522] Low CVE-2012-2847: Missing re-prompt to user upon excessive downloads. Credit to Matt Austin of Aspect Security.
  • [127525] Medium CVE-2012-2848: Overly broad file access granted after drag+drop. Credit to Matt Austin of Aspect Security.
  • [128163] Low CVE-2012-2849: Off-by-one read in GIF decoder. Credit to Atte Kettunen of OUSPG.
  • [130251] [130592] [130611] [131068] [131237] [131252] [131621] [131690] [132860] Medium CVE-2012-2850: Various lower severity issues in the PDF viewer. Credit to Mateusz Jurczyk of Google Security Team, with contributions by Gynvael Coldwind of Google Security Team.
  • [132585] [132694] [132861] High CVE-2012-2851: Integer overflows in PDF viewer. Credit to Mateusz Jurczyk of Google Security Team, with contributions by Gynvael Coldwind of Google Security Team.
  • [134028] High CVE-2012-2852: Use-after-free with bad object linkage in PDF. Credit to Alexey Samsonov of Google.
  • [134101] Medium CVE-2012-2853: webRequest can interfere with the Chrome Web Store. Credit to Trev of Adblock.
  • [134519] Low CVE-2012-2854: Leak of pointer values to WebUI renderers. Credit to Nasko Oskov of the Chromium development community.
  • [134888] High CVE-2012-2855: Use-after-free in PDF viewer. Credit to Mateusz Jurczyk of Google Security Team, with contributions by Gynvael Coldwind of Google Security Team.
  • [134954] [135264] High CVE-2012-2856: Out-of-bounds writes in PDF viewer. Credit to Mateusz Jurczyk of Google Security Team, with contributions by Gynvael Coldwind of Google Security Team.
  • [$1000] [136235] High CVE-2012-2857: Use-after-free in CSS DOM. Credit to Arthur Gerkis.
  • [$1000] [136894] High CVE-2012-2858: Buffer overflow in WebP decoder. Credit to Jüri Aedla.
  • [Linux only] [137541] Critical CVE-2012-2859: Crash in tab handling. Credit to Jeff Roberts of Google Security Team.
  • [137671] Medium CVE-2012-2860: Out-of-bounds access when clicking in date picker. Credit to Chamal de Silva.

For Chrome 21, Google paid security researchers a grand total of $2,000 in rewards as part of its bug bounty program. This payout is smaller than usual, since Google found most of the vulnerabilities itself this time, using its own AddressSanitizer tool.

Still, Mountain View recently quintupled its maximum bug bounty to $20,000. The company has so far received about 800 qualifying vulnerability reports that span the hundreds of Google-developed services, as well as the software written by the 50 or so firms it has acquired. In just over a year, the program has paid out around $460,000 to roughly 200 individuals.

For the record, Google Chrome 20 was released just five weeks ago (and then updated again three weeks ago). At the time, I expected Chrome 21 to be released “sometime in August.” It turns out I was off by a day.


Insanity: Google Sends New Link Warnings, Then Says You Can Ignore Them

Google’s war on bad links officially became insane today. For months, Google has been sending out warnings about bad links and telling publishers they should act on them, lest they get penalized. Today, Google said the latest round of warnings sent out this week can be safely ignored. That’s not “more transparency,” as Google posted. That’s more confusion.

It’s easiest to do the history first, to better understand the confusion caused by today’s post.

How We Got Here: Link Warnings Earlier This Year

Toward the end of March and in early April, Google began sending out warnings about “artificial” or “unnatural” links, like this one:

Dear site owner or webmaster of….We’ve detected that some of your site’s pages may be using techniques that are outside Google’s Webmaster Guidelines.

Specifically, look for possibly artificial or unnatural links pointing to your site that could be intended to manipulate PageRank. Examples of unnatural linking could include buying links to pass PageRank or participating in link schemes.

We encourage you to make changes to your site so that it meets our quality guidelines. Once you’ve made these changes, please submit your site for reconsideration in Google’s search results.

If you find unnatural links to your site that you are unable to control or remove, please provide the details in your reconsideration request.

If you have any questions about how to resolve this issue, please see our Webmaster Help Forum for support.

Sincerely, Google Search Quality Team

There was some confusion about whether these messages meant that a site was actually penalized for having these links pointing at them or whether Google was just informing the sites but not really taking any negative action. Google’s response on this wasn’t clear:

Google has been able to trace and take action on many types of link networks; we recently decided to make that action more visible. In the past, some links might have been silently distrusted or might not have carried as much weight. More recently, we’ve been surfacing the fact that those links aren’t helping to improve ranking or indexing.

The Penguin Attacks

In late April, the Google Penguin Update went live. Designed to fight spam, it seemed to act by either penalizing publishers who had participated in bad linking activities (as determined by Google) or discounting those links, so they no longer carried as much weight.

All hell broke loose in some quarters, especially among those who had been actively using link networks to boost their rankings in ways that went against Google’s guidelines. One of the suggested recovery options from Google was to remove bad links.

Google Advice: Get Rid Of Bad Links

But what if people couldn’t get links taken down? The head of Google’s web spam team, Matt Cutts, just generally suggested such a thing was possible without giving any specific advice.

This lent further support to those who argued that “negative SEO” was now suddenly a real possibility, that any publisher could be targeted with “bad links” and made to plunge in Google’s rankings. Google stressed that negative SEO of this sort is rare and hard. To date, negative SEO still hasn’t seemed to be a widespread problem for the vast majority of publishers on the web.

Those reassurances — along with a Google help page update saying Google “works hard to prevent” negative SEO — haven’t calmed some. Negative SEO has remained a rallying cry, especially for many hit by Penguin (and many were deservedly hit) looking for a way to fight back against Google.

The New Link Building: Remove My Link Requests

But aside from the negative SEO sideshow, plenty of publishers tried to follow Google’s advice to get links removed. I’ve even had one come to me, from some publisher who was listed in our SearchCap daily newsletter in the past and wanted us to pull down a link. Insane. A link from a reputable site like ours is exactly what you want, and yet they wanted it removed.

The insanity has gotten even worse. We’ve had people threatening to sue to have links removed. We’ve covered that before. Boing Boing also covered another case today (without providing any of the background on how Google itself has fueled some of this craziness).

Today, we covered how some directories are now charging people to have links removed. Let’s be really clear on how topsy-turvy that means things have become. People have wanted links in the past and have been willing to pay for them (despite this being against Google’s rules). Now they’re perhaps willing to pay to have links taken down.

June: Google Says Don’t Ignore Link Warnings

But you’ve got to get those links removed, if you’ve gotten a warning message. After all, Google has said that. In June, at our SMX Advanced conference, Cutts said this about those link warnings:

You should pay attention. Typically your web site ranking will drop if you don’t take action after you get one of those notices.

Here’s the extended video clip on the topic:

But again, what to do if you can’t get links removed? Cutts said that Google might release a “disavow” tool. By the end of June, Bing had even launched such a link disavow tool — not that it helped with Google, of course. Those who had notices from Google about bad links pointing at them, notices they were supposed to act on, still might not be able to get those links removed.

New Batch Of Warnings Goes Out

That leads to yesterday, when Google began sending out a new batch of link notices. Here’s an example of what one of those looks like:

Dear site owner or webmaster of….We’ve detected that some of your site’s pages may be using techniques that are outside Google’s Webmaster Guidelines.

Specifically, look for possibly artificial or unnatural links pointing to your site that could be intended to manipulate PageRank. Examples of unnatural linking could include buying links to pass PageRank or participating in link schemes.

We encourage you to make changes to your site so that it meets our quality guidelines. Once you’ve made these changes, please submit your site for reconsideration in Google’s search results.

If you find unnatural links to your site that you are unable to control or remove, please provide the details in your reconsideration request.

If you have any questions about how to resolve this issue, please see our Webmaster Help Forum for support.

Sincerely, Google Search Quality Team

Yes, that’s exactly the same content as what Google sent in late March. Nothing in the message gives the impression it can be ignored. It even encourages people who can’t get links removed to actively file a reconsideration request with Google.

July: Google Says You Can Ignore Link Warnings

But today, Cutts said this about the messages in a Google+ post:

If you received a message yesterday about unnatural links to your site, don’t panic. In the past, these messages were sent when we took action on a site as a whole. Yesterday, we took another step towards more transparency and began sending messages when we distrust some individual links to a site. While it’s possible for this to indicate potential spammy activity by the site, it can also have innocent reasons.

For example, we may take this kind of targeted action to distrust hacked links pointing to an innocent site. The innocent site will get the message as we move towards more transparency, but it’s not necessarily something that you automatically need to worry about.

If we’ve taken more severe action on your site, you’ll likely notice a drop in search traffic, which you can see in the “Search queries” feature in Webmaster Tools, for example.

As always, if you believe you have been affected by a manual spam action and your site no longer violates the Webmaster Guidelines, go ahead and file a reconsideration request. It’ll take some time for us to process the request, but you will receive a followup message confirming when we’ve processed it.

Like I said, this latest round of messages doesn’t seem to make things more transparent. The messages appear to be the exact same ones that Google previously told people they SHOULD worry about.

How About Just Saying If There’s A Real Concern

How do you know if you’re at risk if you get one of these new messages? Apparently, you also need to look at your traffic from Google and see if there’s a plunge. If so, you have a bad link problem. If not, well, you got a message that apparently can be ignored.

It would sure be much easier if the messages themselves said whether action was really required, and whether there really was a penalty (in a world where penalties that were penalties might now be “adjustments”).

That would be transparent. Instead, I predict this is all just going to cause greater confusion and panic, not more clarity and calmness.

It’s also yet another sign of how creaky the foundation of ranking sites based on links has become. It gets even more difficult these days to know what’s supposed to help or hurt. Links as votes suck.

Postscript: Google’s Matt Cutts commented below on Monday, July 23rd that the newer messages that can be safely ignored are now actually saying that:

An engineer worked over the weekend and starting with the messages that we sent out on Sunday, the messages are now different so that you can tell which type of situation you’re in. We also changed the UI in the webmaster console to remove the yellow caution sign for these newer messages. That reflects the fact that these newer notifications are much more targeted and don’t always require action by the site owner.

See also our follow-up story: Google Updates Link Warnings To (Sort Of) Clarify They Can Be Ignored (Maybe).


5 things about FOSS Linux virtualization you may not know

In January I attended the 10th annual Southern California Linux Expo. In addition to speaking and running the Ubuntu booth, I had an opportunity to talk to other sysadmins about everything from selection of distribution to the latest in configuration management tools and virtualization technology.

I ended up in a conversation with a fellow sysadmin who was using a proprietary virtualization technology on Red Hat Enterprise Linux. Not only did he have surprising misconceptions about the FOSS (Free and Open Source Software) virtualization tools available, he assumed that some of the features he was paying extra for (or not, as the case may be) wouldn’t be in the FOSS versions of the software available.

Here are five features that you might be surprised to find included in the space of FOSS virtualization tools:

1. Data replication with verification for storage in server clusters

When you consider storage for a cluster there are several things to keep in mind:

  • Storage is part of your cluster too; you want it to be redundant
  • For immediate failover, you need replication between your storage devices
  • For data integrity, you want a verification mechanism to confirm the replication is working

Regardless of what you use for storage (a single hard drive, a RAID array, or an iSCSI device), the open source DRBD (Distributed Replicated Block Device) offers quick replication over a network backplane and verification tools you can run at regular intervals to ensure data integrity.
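The replicate-then-verify pattern is simple enough to sketch. Here is a toy Python model (not DRBD code; the function names are invented) of the two halves: writes go to both a primary and a replica, and a periodic verification pass compares per-block checksums rather than shipping the blocks themselves, which is why online verify is cheap enough to schedule regularly.

```python
import hashlib

# Toy model of replicate-then-verify, in the spirit of DRBD's online
# verification: every write lands on both devices, and a scan later
# compares block checksums to catch silent divergence.

def write_block(primary, replica, index, data):
    primary[index] = data
    replica[index] = data  # in the real thing, this travels the network

def verify(primary, replica):
    """Return the indices of blocks whose checksums disagree."""
    mismatched = []
    for index in primary:
        a = hashlib.sha256(primary[index]).hexdigest()
        b = hashlib.sha256(replica.get(index, b"")).hexdigest()
        if a != b:
            mismatched.append(index)
    return mismatched
```

In production you would run the verification pass from cron during quiet hours and resynchronize any blocks it flags.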

Looking to the future, the FOSS distributed object store and file system Ceph is showing great promise for more extensive data replication.

2. Automatic failover in cluster configurations

Whether you’re using KVM (Kernel-based Virtual Machine) or Xen, automatic failover can be handled via two closely integrated FOSS tools, Pacemaker and Corosync. At the core, Pacemaker handles configuration of the resources themselves, while Corosync handles quorum, “aliveness” checks of the hosts and resources, and the logic for moving virtual machines.
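The failover decision itself boils down to two questions: does the cluster still have quorum, and if so, where should each VM run? The Python sketch below is purely illustrative (the function names are invented, and real Pacemaker placement is policy-driven), but it captures why quorum matters: without a majority, the cluster must do nothing rather than risk split-brain.

```python
# Illustrative failover logic: a quorum check over host membership
# (roughly Corosync's job) gating a VM placement decision (roughly
# Pacemaker's job). Not real cluster code.

def has_quorum(alive_hosts, total_hosts):
    # A cluster retains quorum while a strict majority of nodes respond.
    return len(alive_hosts) > total_hosts // 2

def place_vm(current_host, alive_hosts, total_hosts):
    """Return the host a VM should run on after a membership change."""
    if not has_quorum(alive_hosts, total_hosts):
        return None  # no quorum: take no action, avoid split-brain
    if current_host in alive_hosts:
        return current_host  # its host survived; leave it alone
    return sorted(alive_hosts)[0]  # host died; restart on a survivor
```

This is also why production clusters prefer an odd number of nodes: with two nodes, losing either one loses quorum.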

3. Graphical interface for administration

While development of graphical interfaces for administration is an active area, many of the basic tasks (and increasingly, more complicated ones) can be made available through the Virtual Machine Manager application. This manager uses the libvirt toolkit, which can also be used to build custom interfaces for management.

The KVM website has a list of other management tools, ranging from command-line (CLI) to Web-based: www.linux-kvm.org/page/Management_Tools

As does the Xen wiki: wiki.xen.org/wiki/Xen_Management_Tools

4. Live migrations to other hosts

In virtualized environments it’s common to reboot a virtual machine to move it from one host to another, but when shared storage is used it is also possible to do live migrations on KVM and Xen. During a live migration, the virtual machine retains its state as it moves between physical machines. Since there is no reboot, connections stay intact and sessions and services continue to run, with only a short blip of unavailability during the switchover.

Documentation for KVM, including hardware and software requirements for such support, can be found here: www.linux-kvm.org/page/Migration
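Why is the blip so short? Live migration commonly uses an iterative “pre-copy” approach: copy all memory pages while the guest keeps running, then repeatedly re-copy the pages the guest dirtied in the meantime, until the remaining set is small enough to move during a brief pause. Here is a rough Python sketch of that loop; it is an illustration of the general technique, not KVM’s or Xen’s actual implementation.

```python
# Sketch of iterative pre-copy live migration. source_pages maps page
# numbers to contents; get_dirty_pages reports which pages the running
# guest wrote since the last round.

def live_migrate(source_pages, get_dirty_pages, stop_threshold=2, max_rounds=10):
    dest_pages = {}
    to_copy = set(source_pages)          # first round: copy everything
    for _ in range(max_rounds):
        for page in to_copy:
            dest_pages[page] = source_pages[page]
        to_copy = get_dirty_pages()      # pages dirtied during the copy
        if len(to_copy) <= stop_threshold:
            break                        # small enough for a quick pause
    # Final "stop-and-copy": pause the guest, move the last few pages,
    # then resume it on the destination host.
    for page in to_copy:
        dest_pages[page] = source_pages[page]
    return dest_pages
```

The guest only pauses for the final stop-and-copy phase, which is why a workload that dirties memory slowly migrates with barely any downtime, while one that rewrites memory faster than the network can ship it may never converge below the threshold.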

5. Over-allocating shared hardware

KVM has the option to take full advantage of hardware resources by over-allocating both RAM (with adequate swap space available) and CPU. Details about over-allocation and key warnings can be found here: Overcommitting with KVM.
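The arithmetic behind RAM over-allocation is worth spelling out: the sum of guest allocations may exceed physical RAM so long as physical RAM plus swap can cover the worst case, after reserving some memory for the host itself. The sizing sketch below is a common rule of thumb (the reserve figure is an assumption, not an official formula from the KVM documentation).

```python
# Back-of-the-envelope sizing for RAM over-allocation on a KVM host.

def max_guest_ram_mb(physical_mb, swap_mb, host_reserve_mb=512):
    # Guests may collectively be promised up to physical + swap,
    # minus what the host OS itself needs to keep running.
    return physical_mb + swap_mb - host_reserve_mb

def is_overcommitted(guest_allocations_mb, physical_mb):
    # Overcommitted means promises exceed physical RAM; that's fine
    # only while the guests' actual working sets still fit.
    return sum(guest_allocations_mb) > physical_mb
```

For example, a host with 8 GB of RAM and 8 GB of swap could, by this rule, promise guests up to about 15.5 GB in total, but performance degrades sharply once guests actually touch more memory than physically exists and the host starts swapping.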

Conclusion

Data replication with verification for storage, automatic failover, graphical administration interfaces, live migrations, and over-allocation of shared hardware are all currently available with the FOSS virtualization tools included in many modern Linux distributions. As with any move to a more virtualized environment, deployments require diligent testing and configuration, but there are many online resources available, and the friendly folks at LinuxForce are there to help.


VMware vs. Microsoft: Server Virtualization Showdown Heats Up

VMware and Microsoft are increasingly at each other’s throats these days, and that can mean only one thing — each company sees the other as a real threat when it comes to competing in the server virtualization and private cloud computing markets.

VMware has long dominated the server virtualization technology market, and even though the second iteration of Microsoft’s Hyper-V hypervisor is much better than the first one, VMware still looks like top dog. So while the company has had to suffer the constant sniping of Microsoft, including Microsoft’s risible attempts to ridicule “VM Limited” with its Tad Talks videos, VMware has been content to rise above it all and simply say nothing to this point.

But with Windows Server 2012 and the third iteration of Hyper-V just over the horizon, things are changing. At Microsoft’s TechEd in San Francisco this week, Jeff Woolsey, the company’s server virtualization head honcho, boasted that the new Hyper-V will be formidable.

“The guys at VMware claim that they can deliver up to 300,000 IOPS from a single VM. With Windows Server 2012, we’re delivering 985,000 IOPS from a single virtual machine, more than three times [the IOPS claims of VMware],” Woolsey said. “We can go much, much higher, but this is as fast as the hardware will go.”

Sooner or later it was inevitable that VMware would stop ignoring Microsoft and fight back (just as it was inevitable that Microsoft would eventually stop ignoring Apple and fight back in that arena). That’s because VMware’s platform is generally perceived to be much more expensive than Microsoft’s, and once Hyper-V 3 is available there will be fewer and fewer reasons to pay the VMware “tax”.


Too Funny: Google wants .LOL Domain

Google has applied to control a slew of domain names that are not only related to its core business but have “interesting and creative potential.”

The company announced on its blog on Thursday that it has submitted applications to the Internet Corporation for Assigned Names and Numbers (ICANN) for the following domains: .google, .youtube, .doc and .lol.

ICANN, which assigns top-level domains to sites worldwide, will announce in June which domains will be added to the existing list that includes “.com” or “.gov.”

“In 2016, it’s estimated that almost half of the world’s population will be online, yet nearly 50 percent of the websites we visit are found in the .com top-level domain (TLD), which was among the first TLDs created in 1984,” Google wrote on its Official Blog.

“Despite the great opportunities the web has enabled for people around the world, there is still a lingering question about the diversity of the domain space (given that the number of generic TLDs has only increased by 14 in the last 28 years).”

Google aims to grow the number of TLDs in four categories: trademarks (such as .google), those related to its core business (.docs), those that improve the user experience or identify certain genres (.youtube), and fun options (.lol).

The search engine giant also noted that it plans to keep security and abuse prevention top of mind in creating a positive experience for web users.

“We’re just beginning to explore this potential source of innovation on the web, and we are curious to see how these proposed new TLDs will fare in the existing TLD environment,” the company says.

“By opening up more choices for Internet domain names, we hope people will find options for more diverse — and perhaps shorter — signposts in cyberspace.”

What do you think of Google’s choice for new top-level domain names? Which ones would you like to see pop up on the web? Let us know in the comments.

Original Article : http://mashable.com/2012/05/31/google-domain/