IT Monteur's server management service is the proactive maintenance of your servers. Many companies have one or more servers in a data center, but most data centers do not provide managed services, and even where they do, those services include only a few tasks: in most cases the data center takes responsibility only for the server hardware, the network, and the OS.
To run any web application on a server, you need to set up web server software such as Apache or IIS, database software such as MySQL or MS SQL, and configure PHP or ASP.NET.
When a server runs 24x7 to serve a web application, it requires day-to-day maintenance. We manage the server proactively to keep its services running, and we take responsibility for backing up and restoring your application in case of any disaster.
Our server management covers the following:
Server Setup and Management Services in Delhi – India
Initial server setup
Control panel installation and configuration
Firewall installation and configuration
Software and script installation and configuration
Anti-spam and anti-virus installation
Mail server setup
MySQL server setup
DNS setup and configuration
Application Server Management Services in Delhi – India
Magento E-commerce Platform Setup, Configuration and Management
Joomla CMS Setup, Configuration and Management
WordPress CMS Setup, Configuration and Management
Microsoft SharePoint Server Setup, Configuration and Management
Microsoft Windows Small Business Server Setup, Configuration and Management
Apache Tomcat Java Server Setup, Configuration and Management
MySQL, MS SQL, Oracle Database Server Setup, Configuration and Management
Microsoft Exchange Server Setup, Configuration and Management
Server Management and Monitoring Services in Delhi – India
With today’s multi-vendor IT environments, administrators need a server monitoring service that works out-of-the-box across multiple technologies and platforms, be it Windows, Linux, Solaris, Unix, VMware, AIX, HP-UX, etc. Our Monitoring Service provides a single, comprehensive console for your server monitoring needs, using SNMP, WMI, CLI and Telnet/SSH to monitor your server infrastructure regardless of device type or make.
Windows Server Administration, Windows Server Management Services in Delhi – India
Our Windows server management covers the base server, IIS, MS FTP, and MS SQL database without any control panel. We also manage all types of Windows hosting control panels, such as WebsitePanel, Hosting Controller, and Plesk.
Linux Server Administration, Linux Server Management Services in Delhi – India
On Linux servers we manage Apache, MySQL, and DNS without any control panel. We also manage all types of Linux control panels, such as Webmin, Virtualmin, cPanel, and DirectAdmin.
We have extensive expertise in managing all types of web hosting control panels supporting both Windows and Linux servers.
Server Security Management Services in Delhi – India
Please keep in mind that server management, monitoring, and security are a way of life: a set of procedures and policies that must stay malleable, yet be followed consistently.
There is no such thing as one-time hardening, just as there is no such thing as a one-time anti-virus install. As new threats and attacks arise, never allow yourself to feel too safe or too secure: always check things out, stay open to learning new security philosophies, and always be on the lookout for suspicious activity on your machines. For more details, check out our server security management services.
Cloud Infrastructure Management Service in Delhi India
Windows & Linux Server Management/Administration/Support Services in Delhi, Noida, Ghaziabad, Gurgaon, Kolkata, Bangalore, Mumbai, Chennai, India, as well as the USA, UK, UAE, Dubai, and all over the world
Call us on +91 120 649 8887
or
Email us on sales@itmonteur.net
Logical volume management provides a higher-level view of the disk storage on a computer system than the traditional view of disks and partitions. This gives the system administrator much more flexibility in allocating storage to applications and users.
Storage volumes created under the control of the logical volume manager can be resized and moved around almost at will, although this may need some upgrading of file system tools.
The logical volume manager also allows management of storage volumes in user-defined groups, allowing the system administrator to deal with sensibly named volume groups such as “development” and “sales” rather than physical disk names such as “sda” and “sdb”.
Logical volume management is traditionally associated with large installations containing many disks but it is equally suited to small systems with a single disk or maybe two.
One of the difficult decisions facing a new user installing Linux for the first time is how to partition the disk drive. The need to estimate just how much space is likely to be needed for system files and user files makes the installation more complex than is necessary and some users simply opt to put all their data into one large partition in an attempt to avoid the issue.
Once the user has guessed how much space is needed for /home, /usr, and / (or has let the installation program decide), it is quite common for one of these partitions to fill up even when there is plenty of disk space in one of the others.
With logical volume management, the whole disk would be allocated to a single volume group, with logical volumes created to hold the /, /usr, and /home file systems. If, for example, the /home logical volume later filled up but there was still space available on /usr, it would be possible to shrink /usr by a few megabytes and reallocate that space to /home.
Another alternative would be to allocate minimal amounts of space for each logical volume and leave some of the disk unallocated. Then, when the partitions start to fill up, they can be expanded as necessary.
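As a rough sketch of how that initial allocation might look in commands (the device name /dev/hda, the volume group name vg01, the sizes, and the ext3 filesystem are all illustrative assumptions, not taken from the example below):
#pvcreate /dev/hda (initialize the whole disk as an LVM physical volume)
#vgcreate vg01 /dev/hda (gather it into a volume group)
#lvcreate -L 2G -n root vg01 (create modest logical volumes, leaving the rest of the group unallocated)
#lvcreate -L 4G -n usr vg01
#lvcreate -L 3G -n home vg01
Later, when /home starts to fill up, it can be grown from the remaining free space:
#lvextend -L +1G /dev/vg01/home (grow the logical volume by a gigabyte)
#resize2fs /dev/vg01/home (grow the filesystem to match)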
As an example: Joe buys a PC with an 8.4 gigabyte disk and installs Linux, splitting the disk between a small root partition and a large /home partition. This, he thinks, will maximize the amount of space available for all his MP3 files.
Sometime later Joe decides that he wants to install the latest office suite and desktop UI available, but realizes that the root partition isn’t large enough. But, having archived all his MP3s onto a new writable DVD drive, there is plenty of space on /home.
His options are not good:
Reformat the disk, change the partitioning scheme and reinstall.
Buy a new disk and figure out some new partitioning scheme that will require the minimum of data movement.
Set up a symlink farm on / pointing to /home and install the new software on /home.
With LVM this becomes much easier:
Jane buys a similar PC but uses LVM, creating a volume group over most of the disk and dividing it into logical volumes in a similar manner.
/boot is not included as a logical volume because bootloaders don’t understand LVM volumes yet. It may be possible to make /boot on LVM work, but you run the risk of having an unbootable system.
root on an LV should be used by advanced users only. root on LVM requires an initrd image that activates the root LV; if a kernel is upgraded without building the necessary initrd image, that kernel will be unbootable. Newer distributions support LVM in their mkinitrd scripts as well as their packaged initrd images, so this becomes less of an issue over time.
When she hits a similar problem she can reduce the size of /home by a gigabyte and add that space to the root partition.
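In command form, that reshuffle might look like the following (the volume group and volume names are illustrative; the -r option asks LVM to resize the filesystem along with the volume, and note that ext filesystems generally must be unmounted before they can be shrunk):
#lvreduce -r -L -1G /dev/vg01/home (give up one gigabyte from /home)
#lvextend -r -L +1G /dev/vg01/root (and hand it to the root volume)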
Suppose that Joe and Jane then manage to fill up the /home partition as well and decide to add a new 20 Gigabyte disk to their systems.
Joe formats the whole disk as one partition (/dev/hdb1), moves his existing /home data onto it, and uses the new disk as /home. But then he has 6 gigabytes unused, or has to use symlinks to make that disk appear as an extension of /home, say as /home/joe/old-mp3s.
Jane simply adds the new disk to her existing volume group and extends her /home logical volume to include the new disk. Or, in fact, she could move the data from /home on the old disk to the new disk and then extend the existing root volume to cover all of the old disk.
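A sketch of Jane's steps, with hypothetical names (assuming the new disk appears as /dev/hdb and her volume group is called vg01):
#pvcreate /dev/hdb (prepare the new disk for LVM)
#vgextend vg01 /dev/hdb (add it to the existing volume group)
#lvextend -r -L +20G /dev/vg01/home (extend /home across the new space and resize the filesystem)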
Disk management will contribute more topics than any other area to the RHCE 133 paper; it is a vast subject to discuss. Over the coming days I will be posting topics on disk management, so stay tuned, Linux learners.
Linux treats everything as a file, even hardware. For example, if there is one IDE hard disk in a Linux system, it is represented as hda (hard disk “a”) under the /dev directory.
For example, if we have two hard disks, they are represented as /dev/hda and /dev/hdb, where /dev/hda is the primary master HDD and /dev/hdb is the primary slave HDD.
A floppy drive is represented as /dev/fd0, and a second floppy drive as /dev/fd1. A CD-ROM drive is represented as /dev/cdrom, and a DVD writer as /dev/dvdwriter.
Special devices such as SATA disks and USB mass storage are represented as /dev/sda, /dev/sdb, /dev/sdc, and so on up to /dev/sdz.
Partitions on a hard disk are numbered starting from 1. For example, /dev/hda1 is the first partition on the primary master HDD, /dev/hda2 is the second partition on the primary master HDD, and /dev/sdd4 is the fourth partition on the fourth special device.
Before creating any partitions we should remember the following things: a. Check for what purpose we want to create the partition (for example, for creating swap). b. Check whether any free space is left, using the fdisk -l command.
If there is any free space, we can create partitions directly.
Step 1: Check whether there is any free space:
#fdisk -l
Disk /dev/hda: 20.0 GB, 20060651520 bytes
255 heads, 63 sectors/track, 2438 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1               1        1024     8225248+   b  W95 FAT32
/dev/hda2   *        1025        2438    11357955    c  W95 FAT32 (LBA)

Disk /dev/hdb: 80.0 GB, 80060424192 bytes
255 heads, 63 sectors/track, 9733 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdb1   *           1        2432    19535008+  83  Linux
/dev/hdb2            2433        2554      979965   82  Linux swap / Solaris
/dev/hdb3            2555        6202    29302560   83  Linux
/dev/hdb4            6203        9733    28362757+   5  Extended
/dev/hdb5            6203        9733    28362726   83  Linux
fdisk -l shows all the disks present in the system, along with their partitions and any free space. From the above output we can clearly see that the system has two hard disks, one of 20 GB and the other of 80 GB. This is a common interview question: how do you find the hard disk size in Linux?
Step 2: Run fdisk on the disk on which you want to create partitions:
#fdisk /dev/hdb
Here it will show the full details of /dev/hdb:
Disk /dev/hdb: 80.0 GB, 80060424192 bytes
255 heads, 63 sectors/track, 9733 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdb1   *           1        2432    19535008+  83  Linux
/dev/hdb2            2433        2554      979965   82  Linux swap / Solaris
/dev/hdb3            2555        6202    29302560   83  Linux
/dev/hdb4            6203        9733    28362757+   5  Extended
/dev/hdb5            6203        9733    28362726   83  Linux

Command (m for help):
Press m to explore the available commands yourself.
Step 3: Creating a new partition.
Press n (without quotes) to create a new partition, then specify the size in KB or MB, preceding the value with a +. For example, to create a new partition of 23 MB, enter +23MB and press Enter.
One more example: how would you create a new partition of 538 KB? You are right, it is +538KB.
Step 4: So what next? Suppose you do not want to keep this partition and would rather delete it. At this point the changes have not yet been written to the partition table, so to discard them just type q.
Step 5: To write the new partition to the partition table, just press w (without quotes). That's it, you are almost done.
Step 6: Once you have written the changes to the partition table, you must inform the kernel of the change. There are two ways:
A. Restart the system. (You may think this is the easiest way, but never do this on live servers, because you have to give 99.999% uptime on your servers; always use the second way.)
B. Run the partprobe command, which pushes the partition table changes to the kernel:
#partprobe /dev/hdb
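Once the kernel knows about the new partition, you would normally create a filesystem on it and mount it before use. A minimal sketch, assuming the new partition came up as /dev/hdb6 and you want ext3 (both assumptions for illustration):
#mkfs.ext3 /dev/hdb6 (create the filesystem)
#mkdir /data (create a mount point)
#mount /dev/hdb6 /data (mount it)
Add a matching entry to /etc/fstab if the partition should be mounted automatically at boot.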
rsync is a file synchronization and file transfer program for Unix-like systems that minimizes network data transfer by using a form of delta encoding called the rsync algorithm. rsync can further compress the transferred data using zlib compression, and SSH or stunnel can be used to encrypt the transfer.
rsync is typically used to synchronize files and directories between two different systems, one local and one remote. For example, if the command rsync local-file user@remote-host:remote-file is run, rsync will use SSH to connect as user to remote-host.[4] Once connected, it will invoke another copy of rsync on the remote host, and then the two programs will talk to each other over the connection, working together to determine what parts of the file are already on the remote host and don’t need to be transferred over the connection.
rsync can also operate in daemon mode, where it listens by default on TCP port 873, serving files in the native rsync protocol (using the “rsync://” syntax).
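For example, a client could list the modules a daemon exports and then pull one down (the host and module names here are hypothetical):
rsync rsync://mirror.example.com/ (list the modules the daemon offers)
rsync -av rsync://mirror.example.com/pub/ /srv/mirror/ (copy the pub module to a local directory)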
It is released under the GNU General Public License version 3 and is widely used.
Uses
rsync originated as a replacement for rcp and scp. As such, it has a similar syntax to its parent programs.[11] Like its predecessors, it requires the specification of a source and of a destination; either of them may be remote, but not both. Because of the flexibility, speed and scriptability of rsync, it has become a standard Linux utility, included in all popular Linux distributions. It has been ported to Windows (via Cygwin, Grsync or SFU) and Mac OS.
rsync [OPTION] … SRC … [USER@]HOST:DEST
rsync [OPTION] … [USER@]HOST:SRC … [DEST]
…where SRC is the file or directory (or a list of multiple files and directories) to copy from, and DEST represents the file or directory to copy to. (Square brackets indicate optional parameters.)
rsync can synchronize Unix clients to a central Unix server using rsync/ssh and standard Unix accounts. It can be used in desktop environments, for example to efficiently synchronize files with a backup copy on an external hard drive. A scheduling utility such as cron can carry out tasks such as automated encrypted rsync-based mirroring between multiple hosts and a central server.
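As a minimal sketch of such a setup (the paths, remote host, and schedule are illustrative assumptions), a crontab entry could push a home directory to a backup server over SSH every night at 2 a.m.:
0 2 * * * rsync -az --delete /home/ backup@backup.example.com:/backups/home/
The -a flag preserves permissions, ownership, and timestamps, -z compresses the transfer, and --delete removes files from the backup that no longer exist at the source.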
In environments with multiple users, it is very important to use shadow passwords provided by the shadow-utils package to enhance the security of system authentication files. For this reason, the installation program enables shadow passwords by default.
The following is a list of the advantages shadow passwords have over the traditional way of storing passwords on UNIX-based systems:
Shadow passwords improve system security by moving encrypted password hashes from the world-readable /etc/passwd file to /etc/shadow, which is readable only by the root user.
Shadow passwords store information about password aging.
Shadow passwords allow the /etc/login.defs file to enforce security policies.
Most utilities provided by the shadow-utils package work properly whether or not shadow passwords are enabled. However, since password aging information is stored exclusively in the /etc/shadow file, any commands which create or modify password aging information do not work. The following is a list of utilities and commands that do not work without first enabling shadow passwords:
chage
gpasswd
/usr/sbin/usermod (-e or -f options)
/usr/sbin/useradd (-e or -f options)
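For instance, once shadow passwords are enabled, chage can inspect and modify a user's password-aging policy (the account name alice is hypothetical):
#chage -l alice (list the current aging settings for the account)
#chage -M 90 alice (require a password change every 90 days)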
While users can be either people (meaning accounts tied to physical users) or accounts which exist for specific applications to use, groups are logical expressions of organization, tying users together for a common purpose. Users within a group can read, write, or execute files owned by that group.
Each user is associated with a unique numerical identification number called a user ID (UID). Likewise, each group is associated with a group ID (GID). A user who creates a file is also the owner and group owner of that file. The file is assigned separate read, write, and execute permissions for the owner, the group, and everyone else. The file owner can be changed only by root, and access permissions can be changed by both the root user and file owner.
Additionally, Red Hat Enterprise Linux supports access control lists (ACLs) for files and directories which allow permissions for specific users outside of the owner to be set. For more information about this feature, refer to the Access Control Lists chapter of the Storage Administration Guide.
User Private Groups
Red Hat Enterprise Linux uses a user private group (UPG) scheme, which makes UNIX groups easier to manage. A user private group is created whenever a new user is added to the system. It has the same name as the user for which it was created and that user is the only member of the user private group.
User private groups make it safe to set default permissions for a newly created file or directory, allowing both the user and the group of that user to make modifications to the file or directory.
The setting which determines what permissions are applied to a newly created file or directory is called a umask and is configured in the /etc/bashrc file. Traditionally on UNIX systems, the umask is set to 022, which allows only the user who created the file or directory to make modifications. Under this scheme, all other users, including members of the creator’s group, are not allowed to make any modifications. However, under the UPG scheme, this “group protection” is not necessary since every user has their own private group.
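A quick illustration of the effect (the username alice is hypothetical; under the UPG scheme the umask is typically relaxed to 002):
$umask 002
$touch report.txt
$ls -l report.txt
-rw-rw-r-- 1 alice alice 0 Jan 1 10:00 report.txt
Because the group owner is alice's private group, the group write bit grants no one else access until the file is deliberately placed in a shared group directory.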
A kernel panic is an action taken by an operating system upon detecting an internal fatal error from which it cannot safely recover. The term is largely specific to Unix and Unix-like systems; for Microsoft Windows operating systems the equivalent term is “stop error” (or, colloquially, “Blue Screen of Death”).
The kernel routines that handle panics, known as panic() in AT&T-derived and BSD Unix source code, are generally designed to output an error message to the console, dump an image of kernel memory to disk for post-mortem debugging, and then either wait for the system to be manually rebooted or initiate an automatic reboot. The information provided is of a highly technical nature and aims to assist a system administrator or software developer in diagnosing the problem. Kernel panics can also be caused by errors originating outside of kernel space. For example, many Unix OSes panic if the init process, which runs in userspace, terminates.
Causes
A panic may occur as a result of a hardware failure or a software bug in the operating system. In many cases, the operating system could continue operating after an error has occurred; however, the system is then in an unstable state, and rather than risk security breaches and data corruption, the operating system stops to prevent further damage, facilitate diagnosis of the error and, in most cases, allow a restart.
After recompiling a kernel binary image from source code, a kernel panic during booting the resulting kernel is a common problem if the kernel was not correctly configured, compiled or installed. Add-on hardware or malfunctioning RAM could also be sources of fatal kernel errors during start up, due to incompatibility with the OS or a missing device driver. A kernel may also die with a panic message if it is unable to locate a root file system. During the final stages of kernel userspace initialization, a panic is typically triggered if the spawning of init fails, as the system would then be unusable.
If you’re seeing repeated kernel panics, try the following things until they go away.
Do a safe boot: Restart your Mac and hold down the Shift key until you see the gray Apple logo. Doing so temporarily disables some software that could cause problems and runs some cleanup processes. If the kernel panic doesn’t recur, restart again normally.
Update your software: Outdated software is frequently implicated in kernel panics. This may include OS X itself and, very rarely, regular applications. More often it involves low-level software like kernel extensions and drivers. If you’ve installed software that goes with peripherals (network adapters, audio interfaces, graphics cards, input devices, etc.) or antivirus, file-system, or screen-capture tools, those should be the first you check for newer versions. Choose Software Update from the Apple menu to update OS X, Apple apps, and items purchased from the Mac App Store; for other apps, use a built-in updater or check the developer’s website.
Update your firmware: Software Update may also tell you about available updates for your Mac. If so, be sure to install them. You can also check for any firmware updates applicable to your Mac model at http://support.apple.com/kb/ht1237.
Check your disk: Make sure your startup disk has at least 10GB of free space; if it doesn’t, delete some files to make room. Next, to find and fix any disk errors, start from another volume, run Disk Utility, select your startup disk, and click Repair Disk. (The easiest way to do this, if you’re running OS X 10.7 or later, is to restart and then immediately press and hold Command-R to enter OS X Recovery. If that doesn’t work, or if you have an older system, you can start up from a bootable duplicate of your hard disk or OS X install media.)
Check peripherals: If kernel panics continue, shut down your Mac and disconnect everything except the bare minimum (keyboard, pointing device, and display if those aren’t built in)—as well as any hardware you’ve added inside your Mac, such as a graphics card. Turn your Mac back on. If the problem doesn’t reappear, repeat the process, reattaching one device at a time. If you see a kernel panic right after connecting a piece of hardware, that may be your culprit.
Check your RAM: Defective RAM can cause kernel panics, and sometimes these defects manifest themselves only after time. If you’ve added any after-market RAM, try turning off your Mac, removing the extra RAM, and restarting. If that makes the kernel panics disappear, contact the company that sold you the RAM to see about a warranty replacement.
Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet). Clouds can be classified as public, private or hybrid.
Overview
Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the broader concept of converged infrastructure and shared services.
Cloud computing, or in simpler shorthand just “the cloud”, also focuses on maximizing the effectiveness of the shared resources. Cloud resources are usually not only shared by multiple users but are also dynamically reallocated per demand, which allows capacity to be shifted between users as their needs change. For example, a cloud computing facility that serves European users during European business hours with a specific application (e.g., email) may reallocate the same resources to serve North American users during North America’s business hours with a different application (e.g., a web server). This approach maximizes the use of computing power and reduces environmental impact as well, since less power, air conditioning, rack space, and so on are required for a variety of functions. With cloud computing, multiple users can access a single server to retrieve and update their data without purchasing licenses for different applications.
The term “moving to cloud” also refers to an organization moving away from a traditional CAPEX model (buy the dedicated hardware and depreciate it over a period of time) to the OPEX model (use a shared cloud infrastructure and pay as one uses it).
Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs, and focus on projects that differentiate their businesses instead of on infrastructure. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand. Cloud providers typically use a “pay as you go” model. This can lead to unexpectedly high charges if administrators do not adapt to the cloud pricing model.
The present availability of high-capacity networks, low-cost computers and storage devices as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing have led to a growth in cloud computing.
Cloud vendors are experiencing growth rates of 50% per annum.
A virtual private server (VPS) is a virtual machine sold as a service by an Internet hosting service.
A VPS runs its own copy of an operating system, and customers have superuser-level access to that operating system instance, so they can install almost any software that runs on that OS. For many purposes they are functionally equivalent to a dedicated physical server, and being software defined are able to be much more easily created and configured. They are priced much lower than an equivalent physical server, but as they share the underlying physical hardware with other VPSs, performance may be lower, and may depend on the workload of other instances on the same hardware node.
Hosting
Main article: Comparison of platform virtual machines
Many companies offer virtual private server hosting or virtual dedicated server hosting as an extension for web hosting services. There are several challenges to consider when licensing proprietary software in multi-tenant virtual environments.
With unmanaged or self-managed hosting, the customer is left to administer their own server instance.
Unmetered hosting is generally offered with no limit on the amount of data transferred on a fixed-bandwidth line. Usually, unmetered hosting[2] is offered at 10 Mbit/s, 100 Mbit/s or 1000 Mbit/s (with some as high as 10 Gbit/s). This means that the customer is theoretically able to transfer about 3.33 TB per month on a 10 Mbit/s line, about 33 TB on 100 Mbit/s, and about 333 TB on a 1000 Mbit/s line (although in practice the values will be significantly less). On a virtual private server this is shared bandwidth, and a fair usage policy should be involved. Unlimited hosting is also commonly marketed but is generally limited by acceptable usage policies and terms of service. Offers of unlimited disk space and bandwidth are always false due to cost, carrier capacities and technological boundaries.
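The monthly figures above follow from simple arithmetic: a 31-day month has 2,678,400 seconds, so a saturated 10 Mbit/s line carries 10,000,000 bits/s x 2,678,400 s = 26,784,000,000,000 bits, or about 3.35 TB once divided by 8 bits per byte; the 100 Mbit/s and 1000 Mbit/s figures scale up by factors of 10 and 100 accordingly.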