
Kernel Panic

A kernel panic is an action taken by an operating system upon detecting an internal fatal error from which it cannot safely recover. The term is largely specific to Unix and Unix-like systems; for Microsoft Windows operating systems the equivalent term is “stop error” (or, colloquially, “Blue Screen of Death”).

The kernel routines that handle panics, known as panic() in AT&T-derived and BSD Unix source code, are generally designed to output an error message to the console, dump an image of kernel memory to disk for post-mortem debugging, and then either wait for the system to be manually rebooted or initiate an automatic reboot. The information provided is of a highly technical nature and aims to assist a system administrator or software developer in diagnosing the problem. Kernel panics can also be caused by errors originating outside of kernel space. For example, many Unix OSes panic if the init process, which runs in userspace, terminates.
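In outline, the panic sequence can be modeled in userspace. The sketch below is hypothetical Python, not kernel code; its panic_timeout parameter mirrors Linux’s kernel.panic sysctl, where 0 means halt and wait indefinitely for a manual reboot and a positive value means reboot automatically after that many seconds:

```python
import sys

def panic(message: str, panic_timeout: int = 0) -> str:
    """Userspace model of the outline of a kernel panic() routine.

    panic_timeout mirrors Linux's kernel.panic sysctl: 0 means halt
    and wait for a manual reboot; a positive value means reboot
    automatically after that many seconds.
    """
    # 1. Output an error message to the console.
    print(f"Kernel panic - not syncing: {message}", file=sys.stderr)
    # 2. (A real kernel would dump an image of kernel memory here
    #    for post-mortem debugging.)
    # 3. Either wait for a manual reboot or schedule an automatic one.
    if panic_timeout > 0:
        return f"reboot in {panic_timeout}s"
    return "halted, waiting for manual reboot"
```

For example, `panic("Attempted to kill init!", panic_timeout=10)` models the classic case from the text where init terminates, on a system configured to auto-reboot after ten seconds.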

Causes

A panic may occur as a result of a hardware failure or a software bug in the operating system. In many cases, the operating system could continue running after the error. However, the system is then in an unstable state, and rather than risk security breaches and data corruption, the operating system stops to prevent further damage, to facilitate diagnosis of the error, and, in most cases, to allow a restart.

After recompiling a kernel binary image from source code, a kernel panic during booting the resulting kernel is a common problem if the kernel was not correctly configured, compiled or installed. Add-on hardware or malfunctioning RAM could also be sources of fatal kernel errors during start up, due to incompatibility with the OS or a missing device driver. A kernel may also die with a panic message if it is unable to locate a root file system. During the final stages of kernel userspace initialization, a panic is typically triggered if the spawning of init fails, as the system would then be unusable.

If you’re seeing repeated kernel panics, try the following remedies until the panics stop.

Do a safe boot: Restart your Mac and hold down the Shift key until you see the gray Apple logo. Doing so temporarily disables some software that could cause problems and runs some cleanup processes. If the kernel panic doesn’t recur, restart again normally.

Update your software: Outdated software is frequently implicated in kernel panics. This may include OS X itself and, very rarely, regular applications. More often it involves low-level software like kernel extensions and drivers. If you’ve installed software that goes with peripherals (network adapters, audio interfaces, graphics cards, input devices, etc.) or antivirus, file-system, or screen-capture tools, those should be the first things you check for newer versions. Choose Software Update from the Apple menu to update OS X, Apple apps, and items purchased from the Mac App Store; for other apps, use a built-in updater or check the developer’s website.

Update your firmware: Software Update may also tell you about available updates for your Mac. If so, be sure to install them. You can also check for any firmware updates applicable to your Mac model at http://support.apple.com/kb/ht1237.

Check your disk: Make sure your startup disk has at least 10GB of free space; if it doesn’t, delete some files to make room. Next, to find and fix any disk errors, start from another volume, run Disk Utility, select your startup disk, and click Repair Disk. (The easiest way to do this, if you’re running OS X 10.7 or later, is to restart and then immediately press and hold Command-R to enter OS X Recovery. If that doesn’t work, or if you have an older system, you can start up from a bootable duplicate of your hard disk or OS X install media.)
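The free-space part of this check is easy to script. Here is a hedged sketch using only Python’s standard library; the 10 GB threshold is the rule of thumb from the text:

```python
import shutil

def has_enough_free_space(path: str = "/", minimum_gb: float = 10.0) -> bool:
    """Return True if the volume containing `path` has at least
    `minimum_gb` gigabytes free (decimal gigabytes, 10**9 bytes)."""
    usage = shutil.disk_usage(path)
    free_gb = usage.free / 1_000_000_000
    return free_gb >= minimum_gb
```

If `has_enough_free_space("/")` returns False, delete some files to make room before moving on to the Disk Utility repair step.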

Check peripherals: If kernel panics continue, shut down your Mac and disconnect everything except the bare minimum (keyboard, pointing device, and display if those aren’t built in)—as well as any hardware you’ve added inside your Mac, such as a graphics card. Turn your Mac back on. If the problem doesn’t reappear, repeat the process, reattaching one device at a time. If you see a kernel panic right after connecting a piece of hardware, that may be your culprit.

Check your RAM: Defective RAM can cause kernel panics, and sometimes these defects manifest themselves only over time. If you’ve added any after-market RAM, try turning off your Mac, removing the extra RAM, and restarting. If that makes the kernel panics disappear, contact the company that sold you the RAM to see about a warranty replacement.

Cloud Computing

Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet). Clouds can be classified as public, private or hybrid.

Overview

Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the broader concept of converged infrastructure and shared services.

Cloud computing, or in simpler shorthand just “the cloud”, also focuses on maximizing the effectiveness of the shared resources. Cloud resources are usually not only shared by multiple users but also dynamically reallocated on demand. For example, a cloud computing facility that serves European users during European business hours with a specific application (e.g., email) may reallocate the same resources to serve North American users during North America’s business hours with a different application (e.g., a web server). This approach maximizes the use of computing power and reduces environmental impact as well, since less power, air conditioning, rack space, etc. are required for a given set of functions. With cloud computing, multiple users can access a single server to retrieve and update their data without purchasing licenses for different applications.
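A toy sketch of this follow-the-sun reallocation might look like the following; the regions, UTC offsets, and workloads are purely illustrative:

```python
def active_region(utc_hour: int) -> str:
    """Decide which region's workload gets the shared capacity,
    based on whose business hours (roughly 09:00-17:00 local) it is.
    Offsets are illustrative: Central Europe ~UTC+1,
    central North America ~UTC-6.
    """
    def is_business_hours(hour_utc: int, utc_offset: int) -> bool:
        local = (hour_utc + utc_offset) % 24
        return 9 <= local < 17

    if is_business_hours(utc_hour, +1):    # European business hours
        return "serve European email workload"
    if is_business_hours(utc_hour, -6):    # North American business hours
        return "serve North American web workload"
    return "run batch/maintenance jobs"
```

At 10:00 UTC the capacity goes to the European email workload; at 18:00 UTC (noon in central North America) it goes to the North American web workload; overnight it falls back to batch jobs.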

The term “moving to cloud” also refers to an organization moving away from a traditional CAPEX model (buy the dedicated hardware and depreciate it over a period of time) to the OPEX model (use a shared cloud infrastructure and pay as one uses it).

Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs, and focus on projects that differentiate their businesses instead of on infrastructure. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand. Cloud providers typically use a “pay as you go” model. This can lead to unexpectedly high charges if administrators do not adapt to the cloud pricing model.
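The CAPEX-versus-OPEX trade-off, and the surprise charges, come down to simple arithmetic. All figures in this sketch are hypothetical:

```python
def monthly_capex_cost(hardware_price: float, years_depreciation: int) -> float:
    """Straight-line depreciation: spread the purchase price evenly
    over the hardware's useful life."""
    return hardware_price / (years_depreciation * 12)

def monthly_opex_cost(hourly_rate: float, hours_used: float) -> float:
    """Pay-as-you-go: you pay for the hours an instance actually runs."""
    return hourly_rate * hours_used

# Hypothetical numbers: a $9,000 server depreciated over 3 years,
# versus a $0.50/hour cloud instance.
capex     = monthly_capex_cost(9_000, 3)     # $250 per month
always_on = monthly_opex_cost(0.50, 730)     # $365/month if never stopped
part_time = monthly_opex_cost(0.50, 200)     # $100/month for 200 hours
```

The comparison cuts both ways: an instance left running around the clock costs more per month than the depreciated server, while one used only part-time costs far less, which is exactly why administrators need to adapt to the pricing model.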

The present availability of high-capacity networks, low-cost computers and storage devices as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing have led to a growth in cloud computing.

Cloud vendors have reported growth rates of around 50% per annum.


Virtual Private Server

A virtual private server (VPS) is a virtual machine sold as a service by an Internet hosting service.

A VPS runs its own copy of an operating system, and customers have superuser-level access to that operating system instance, so they can install almost any software that runs on that OS. For many purposes a VPS is functionally equivalent to a dedicated physical server and, being software-defined, can be created and configured much more easily. VPSs are priced much lower than an equivalent physical server, but because they share the underlying physical hardware with other VPSs, performance may be lower and may depend on the workload of other instances on the same hardware node.

Hosting


Many companies offer virtual private server hosting or virtual dedicated server hosting as an extension for web hosting services. There are several challenges to consider when licensing proprietary software in multi-tenant virtual environments.

With unmanaged or self-managed hosting, the customer is left to administer their own server instance.

Unmetered hosting is generally offered with no limit on the amount of data transferred on a fixed-bandwidth line. Usually, unmetered hosting is offered at 10 Mbit/s, 100 Mbit/s or 1000 Mbit/s (with some as high as 10 Gbit/s). This means that the customer could theoretically transfer roughly 3.3 TB per month on a 10 Mbit/s line, roughly 33 TB on 100 Mbit/s, and roughly 333 TB on 1000 Mbit/s (although in practice the values will be significantly less). On a virtual private server this bandwidth is shared, and a fair-usage policy should apply. Unlimited hosting is also commonly marketed but is generally limited by acceptable-use policies and terms of service. Offers of unlimited disk space and bandwidth are always false due to cost, carrier capacities and technological boundaries.
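The theoretical monthly maxima quoted above follow from simple arithmetic. The sketch below is an illustration, using decimal terabytes and a 31-day month:

```python
def max_monthly_transfer_tb(mbit_per_s: float, days: int = 31) -> float:
    """Theoretical maximum data transferred on a fully saturated line,
    in decimal terabytes (10**12 bytes)."""
    bytes_per_second = mbit_per_s * 1_000_000 / 8   # Mbit/s -> bytes/s
    total_bytes = bytes_per_second * days * 24 * 60 * 60
    return total_bytes / 1e12
```

A saturated 10 Mbit/s line moves about 3.35 TB in a 31-day month (about 3.24 TB in a 30-day month), which is where the ~3.3 TB figure comes from; 100 Mbit/s and 1000 Mbit/s scale linearly to roughly 33 TB and 335 TB.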

Network Traffic Analysis


Linux Tools For Network Analysis

Network problems

Networks are funny places where all sorts of things happen in a matter of microseconds. Domain Name System (DNS) lookups are answered, and data blocks traverse the network as part of file-sharing protocols (such as SMB and NFS) while packets make their way from the Internet to your web browser. At any moment a network printer could go haywire and start broadcasting an endless stream of address resolution requests, or an NFS client could send mangled data to its server, wreaking havoc on your work.

If you’ve done any systems administration work, you have probably seen these problems and dozens of others. Debugging them requires experience, as well as the right tools to diagnose what has gone wrong and to help determine what to do about it.


Network analysis

One of the most valuable tools in diagnosing a network problem, besides the manuals that come with all of your networking gear, is a network protocol analyzer. A network protocol analyzer listens to the network, then displays the data in a way that lets you watch things such as

  • interactions of clients and servers,
  • broadcasts,
  • packet storms, and
  • routing updates.
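At the lowest level, a protocol analyzer does this by decoding raw bytes captured from the wire into protocol fields. As an illustrative sketch (not how any particular analyzer is implemented), the following parses the 14-byte Ethernet II header at the start of a captured frame:

```python
import struct

def _format_mac(mac: bytes) -> str:
    """Render a 6-byte MAC address as colon-separated hex."""
    return ":".join(f"{b:02x}" for b in mac)

def parse_ethernet_header(frame: bytes) -> dict:
    """Decode the 14-byte Ethernet II header: destination MAC,
    source MAC, and EtherType (network byte order)."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return {
        "dst": _format_mac(dst),
        "src": _format_mac(src),
        "ethertype": hex(ethertype),
    }

# A hand-crafted frame for illustration: broadcast destination,
# a made-up source MAC, and EtherType 0x0806 (ARP).
frame = bytes.fromhex("ffffffffffff" "001122334455" "0806") + b"\x00" * 28
```

Decoding this frame yields a broadcast destination and EtherType 0x0806, i.e. exactly the kind of address-resolution (ARP) broadcast a haywire printer would flood the network with.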

Commercial network analysis software packages can cost more than $1,000 for the software alone. Add a dedicated top-of-the-line laptop and a high-speed network controller, and the cost can easily exceed $5,000.

Fortunately, there are open source, Linux-based solutions that can give you all of the benefits of a commercial product (along with the ability to extend the software) at a fraction of the price.

Two packages that make network diagnostics and troubleshooting easier are Ethereal and Netwatch.

  • Ethereal is a “network sniffer” package that allows you to look at all of the traffic on a network.
  • Netwatch monitors traffic flow between clients and servers (such as between a web browser and a web server) and determines what ports are being used in those communications.

Ethereal

Ethereal, as shown in Figure 1, is a GUI-based program that displays packet traffic on a network. In this figure, Ethereal displays several packets on my home network, including DNS lookup packets, NFS transactions, and e-mail being delivered via the POP3 protocol. The packet highlighted in this example is a WHO packet, part of a protocol that reports on machine uptimes and records who is logged in to which machine.

Figure 1. Ethereal displays packet traffic on a network.

In this example, the middle panel of Ethereal shows the decomposition of the WHO packet, which contains sub-fields describing who is logged into the machine that broadcast the packet, along with other relevant machine information such as load averages and uptimes.

The bottom panel of Ethereal shows the actual packet data as a hexadecimal dump of bytes.

Taken as a whole, Ethereal is a complete network traffic analysis tool. A short list of features includes:

  • A session tracer that shows network sessions as collections of transactions, rather than just as network packets
  • A text-mode tool that uses the Ethereal packet engine and can be run from either an X terminal or a shell window with no windowing support
  • Colorization modes for the packet displays
  • The ability to read dump files from other (commercial) network analyzer packages

Netwatch

Gordon MacKay’s Netwatch utility, which runs in a terminal window, is invaluable for watching network loads and for seeing, at a higher level than Ethereal, who is talking to whom on your network. As shown in Figure 2, Netwatch monitors network bandwidth in terms of which hosts are producing and consuming packets.

Figure 2. Netwatch monitors network bandwidth.

Another useful mode of Netwatch, seen in Figure 3, shows which ports are involved in the communications between hosts. This can be very useful in verifying that the client/server applications on your network are using the ports you expect them to use.

It can also alert you to potential trouble if you see hosts using ports that should never appear on your network. For example, if you see a service (on a TCP or UDP port) that shouldn’t be running, it could mean someone is running an unauthorized service on a machine (for example, a Quake game server) or that someone has broken in.
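The kind of unexpected-port check described in the Netwatch discussion can be automated with an allowlist. A minimal sketch, with a purely hypothetical set of expected ports:

```python
# Server ports we expect to see in use on this network
# (a hypothetical allowlist, purely for illustration).
EXPECTED_PORTS = {22, 25, 53, 80, 110, 443}

def unexpected_services(observed: list[tuple[str, int]]) -> list[tuple[str, int]]:
    """Given (host, server_port) pairs seen on the wire, return the
    pairs whose port is outside the allowlist -- candidates for an
    unauthorized service or a break-in."""
    return [(host, port) for host, port in observed
            if port not in EXPECTED_PORTS]

seen = [("10.0.0.5", 80), ("10.0.0.9", 27960)]   # 27960: a Quake-style game port
suspicious = unexpected_services(seen)
```

Running the check on the sample data flags only the host serving the game port, while ordinary web traffic on port 80 passes.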