Thursday, August 20, 2015

How to enable Autologin to Linux console using mingetty

Posted by Nikesh Jauhari
The mingetty program is a lightweight, minimalist getty program for use only on virtual consoles. Mingetty is not suitable for serial lines (you should use the mgetty program instead for that purpose).

If you have the mingetty program installed, your /etc/inittab file will contain lines like these:
1:2345:respawn:/sbin/mingetty --noclear tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6
The first field says that this is the line for /dev/tty1. The second field says that it applies to run levels 2, 3, 4, and 5. The third field means that the command should be run again, after it exits (so that one can log in, log out, and then log in again). The last field is the command that runs mingetty on the first virtual terminal.

The above series of lines also uses the "respawn" option to keep six mingetty processes running on the system. If someone tries to kill one of these processes, even as root, the process will simply be respawned. Only critical processes are set up in this way, to keep them safe from anything else happening on the system.

If you're curious about these processes, check them out on the running system using the command:

$ ps -ef | grep getty

Now, to enable autologin for a particular user, edit the /etc/inittab file and modify the line for the terminal on which you want the user to be logged in automatically:
1:12345:respawn:/sbin/mingetty --noclear --autologin username tty1
Reboot the system after making the above change and ensure that init has spawned the new version of mingetty; if all is well, it will automatically log you in on the console.
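On SysV-init systems you can often avoid a full reboot. A hedged sketch, assuming the standard telinit utility is available, that asks init to re-read /etc/inittab and then verifies the respawned getty:

```shell
# Ask init to re-examine /etc/inittab ('q' tells SysV init to re-read it)
telinit q

# Verify that mingetty was respawned with the new arguments
ps -ef | grep -v grep | grep mingetty
```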


Read more: http://linuxpoison.blogspot.com/2010/08/how-to-enable-autologin-to-linux.html

Wednesday, August 19, 2015


Linux Useful Questions

Let’s say you maintain regular backups for the company you work for, and the backups are kept in a compressed file format. You need to examine a log that is two months old. How would you do this without decompressing the compressed file?
Answer : To check the contents of a compressed file without decompressing it, use ‘zcat’. The zcat utility prints the contents of a compressed file to standard output.
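A minimal, self-contained sketch of the idea (the file name is made up for illustration):

```shell
# Create a gzip-compressed "log" to stand in for the backup
echo "Jun 01 10:00:01 app started" > app.log
gzip app.log                       # produces app.log.gz, removes app.log

# View and search it without decompressing
zcat app.log.gz                    # prints the contents; file stays compressed
zcat app.log.gz | grep "started"   # search inside the compressed file
```

zgrep performs the zcat-plus-grep combination in one step.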

You need to track events on your system. What will you do?
Answer : To track events on the system, we use a daemon called syslogd. The syslogd daemon collects information about system events and saves it to the specified log files.
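For example, you can inject a test event with logger(1) and then look for it in the log file (the path varies by distribution: /var/log/messages on Red Hat-style systems, /var/log/syslog on Debian/Ubuntu):

```shell
# Send a tagged test message through syslog
logger -t mytest "disk usage threshold exceeded"

# Look for it in the system log (path depends on the distribution)
tail -n 20 /var/log/messages | grep mytest    # or /var/log/syslog
```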

How will you restrict IPs so that the restricted IPs may not use the FTP server?
Answer : We can block suspicious IPs by integrating tcp_wrappers. Enable the parameter “tcp_wrappers=YES” in the configuration file ‘/etc/vsftpd.conf’, then add the suspicious IP to the ‘/etc/hosts.deny’ file.
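A hedged sketch of the two changes (run as root; file locations follow the answer above, and the IP address is a made-up example):

```shell
# Enable TCP wrappers support in vsftpd
# (requires a vsftpd build with tcp_wrappers support)
echo "tcp_wrappers=YES" >> /etc/vsftpd.conf

# Deny the suspicious IP access to the vsftpd service
echo "vsftpd: 192.168.1.50" >> /etc/hosts.deny
```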



What are the permissions of /etc/passwd, and how can a user change their password?
How do you view shared library dependencies?
How do you trace system calls and signals?
How do you profile an application?
How do you print the strings of printable characters in a file?
What fields are stored in an inode?
What is nscd?
What are Automake and Autoconf?
What steps would you take to add a user to a system without using useradd/adduser?
How do you view information about ELF files?
What are the MAJOR and MINOR numbers of special files?
How do you do link-layer filtering?
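Hedged example invocations for several of the questions above (standard tools; /bin/ls is just a convenient target binary):

```shell
ldd /bin/ls              # show shared library dependencies
strace -f ls /tmp        # trace system calls and signals (follow children)
strings /bin/ls | head   # print runs of printable characters in a binary
readelf -h /bin/ls       # show information about an ELF file header
ls -l /dev/null          # major,minor numbers appear where the size would be
getent passwd root       # an account lookup of the kind nscd can cache
```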

Question: Describe a scenario when you get a "filesystem is full" error, but 'df' shows there is free space

Answer: The filesystem can run out of inodes, 'df -i' will show that.
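For example ('/data' is a hypothetical mount point):

```shell
df -h /data     # block usage -- Use% may be well under 100%
df -i /data     # IUse% at 100% means the filesystem has run out of inodes

# A common culprit is a directory holding millions of tiny files;
# unusually large directory entries are a good lead:
find /data -xdev -type d -size +1M 2>/dev/null
```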
Tell me two ways to redirect both stderr and stdout at once.
Answer: '&>' and '>/dev/null 2>&1' (the second demonstrates knowledge of bash vs. Bourne shell).
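Both forms capture stdout and stderr together; '&>' is a bash extension, while '> file 2>&1' is portable Bourne/POSIX syntax:

```shell
# bash-only shorthand
ls /nonexistent &> all.log

# portable form: redirect stdout to the file, then point stderr at stdout
ls /nonexistent > all.log 2>&1
cat all.log    # contains the "No such file or directory" error
```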

I just ran 'chmod -x /bin/chmod'. What did I do? How do I recover?

Describe TCP's handshake process.

How does traceroute work?

When might you need to use CTRL-Z or CTRL-D?

What does the sticky bit do?

What kernel options might you need to tune?

How do you tell what distribution you're running?

How do you tell what hardware you're running on?

What is the difference between a SAN, a NAS and local disk?

I have 30 servers and I'm not sure if each has the same apache config. How do I find out how many copies there are and what the differences are?

What's a chroot jail?

How do you tell if you've been hacked?

Name all the two-letter Unix commands you can think of and what they do. How could you look up all the two-letter Unix commands on your system?

If you were stuck on a desert island with only 5 command-line utilities, which would you choose?
date whoami echo sleep kill



Q.What is Nagios and how does it work?
Ans:Nagios is an open source system and network monitoring application. Nagios runs on a server, usually as a daemon or service. It periodically runs plugins residing (usually) on the same server; they contact (PING, etc.) hosts and servers on your network or on the Internet. You can also have information sent to Nagios. You then view the status information using the web interface, and you can receive email or SMS notifications if something happens. Event handlers can also be configured to act when something happens.
The Nagios daemon behaves like a scheduler that runs certain scripts at certain moments. It stores the results of those scripts and will run other scripts if those results change. All these scripts are, of course, scripts from the Nagios plug-in project or scripts that you have created.
Q.Explain the main configuration files and their locations?
Ans:1.Resource File : Used to store sensitive information like usernames and passwords without making them available to the CGIs.
2.Object Definition Files: The location where you define everything you want to monitor and how you want to monitor it. Used to define hosts, services, hostgroups, contacts, contact groups, commands, etc.
3.CGI Configuration File : The CGI configuration file contains a number of directives that affect the operation of the CGIs. It also contains a reference to the main configuration file, so the CGIs know how you've configured Nagios and where your object definitions are stored.
Q.Explain Nagios files and their locations?
Ans:The main configuration file is usually named nagios.cfg and is located in the /usr/local/nagios/etc/ directory. Important files and directives include:
1.Log File :log_file=/usr/local/nagios/var/nagios.log
2.Object Configuration File :This directive is used to specify an object configuration file containing object definitions that Nagios should use for monitoring.
cfg_file=/usr/local/nagios/etc/hosts.cfg
cfg_file=/usr/local/nagios/etc/services.cfg
cfg_file=/usr/local/nagios/etc/commands.cfg
3.Object Configuration Directory :This directive is used to specify a directory which contains object configuration files that Nagios should use for monitoring.
cfg_dir=/usr/local/nagios/etc/commands
cfg_dir=/usr/local/nagios/etc/services
cfg_dir=/usr/local/nagios/etc/hosts
4.Object Cache File :This directive is used to specify a file in which a cached copy of object definitions should be stored.
object_cache_file=/usr/local/nagios/var/objects.cache
5.Precached Object File: precached_object_file=/usr/local/nagios/var/objects.precache
This directive specifies a file containing a pre-processed, pre-cached copy of object definitions, which Nagios can read at startup to speed up configuration parsing.
Resource File: This directive (resource_file) is used to specify an optional resource file that can contain $USERn$ macro definitions. $USERn$ macros are useful for storing usernames, passwords, and items commonly used in command definitions (like directory paths). The CGIs will not attempt to read resource files, so you can set restrictive permissions (600 or 660) on them to protect sensitive information. You can include multiple resource files by adding multiple resource_file statements to the main config file - Nagios will process them all.
6.Temp File :temp_path=/tmp
This is a directory that Nagios can use as scratch space for creating temporary files used during the monitoring process. You should run tmpwatch, or a similar utility, on this directory occasionally to delete files older than 24 hours.
7.Status File :status_file=/usr/local/nagios/var/status.dat
This is the file that Nagios uses to store the current status, comment, and downtime information. This file is used by the CGIs so that current monitoring status can be reported via a web interface. The CGIs must have read access to this file in order to function properly. This file is deleted every time Nagios stops and recreated when it starts.
8.Log Archive Path :log_archive_path=/usr/local/nagios/var/archives/
This is the directory where Nagios should place log files that have been rotated. This option is ignored if you choose to not use the log rotation functionality.
9.External Command File :command_file=/usr/local/nagios/var/rw/nagios.cmd
This is the file that Nagios will check for external commands to process. The command CGI writes commands to this file. The external command file is implemented as a named pipe (FIFO), which is created when Nagios starts and removed when it shuts down. If the file exists when Nagios starts, the Nagios process will terminate with an error message.
10.Lock File :lock_file=/tmp/nagios.lock
This option specifies the location of the lock file that Nagios should create when it runs as a daemon (when started with the -d command line argument). This file contains the process id (PID) number of the running Nagios process.
11.State Retention File: state_retention_file=/usr/local/nagios/var/retention.dat
This is the file that Nagios will use for storing status, downtime, and comment information before it shuts down. When Nagios is restarted it will use the information stored in this file for setting the initial states of services and hosts before it starts monitoring anything. In order to make Nagios retain state information between program restarts, you must enable the retain_state_information option.
12.Check Result Path :check_result_path=/var/spool/nagios/checkresults
This option determines which directory Nagios will use to temporarily store host and service check results before they are processed. This directory should not be used to store any other files, as Nagios will periodically clean it of old files (see the max_check_result_file_age option for more information).
13.Host Performance Data File :host_perfdata_file=/usr/local/nagios/var/host-perfdata.dat
This option allows you to specify a file to which host performance data will be written after every host check. Data will be written to the performance file as specified by the host_perfdata_file_template option. Performance data is only written to this file if the process_performance_data option is enabled globally and if the process_perf_data directive in the host definition is enabled.
14.Service Performance Data File:service_perfdata_file=/usr/local/nagios/var/service-perfdata.dat
This option allows you to specify a file to which service performance data will be written after every service check. Data will be written to the performance file as specified by the service_perfdata_file_template option. Performance data is only written to this file if the process_performance_data option is enabled globally and if the process_perf_data directive in the service definition is enabled.
15.Debug File :debug_file=/usr/local/nagios/var/nagios.debug
This option determines where Nagios should write debugging information. What (if any) information is written is determined by the debug_level and debug_verbosity options. You can have Nagios automatically rotate the debug file when it reaches a certain size by using the max_debug_file_size option.
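Tying the directives above together, a minimal nagios.cfg sketch (paths are the defaults used in this article; this is an illustrative fragment, not a complete working configuration):

```shell
# Write a minimal main configuration file (illustrative fragment only)
cat > /usr/local/nagios/etc/nagios.cfg <<'EOF'
log_file=/usr/local/nagios/var/nagios.log
cfg_file=/usr/local/nagios/etc/hosts.cfg
cfg_file=/usr/local/nagios/etc/services.cfg
cfg_file=/usr/local/nagios/etc/commands.cfg
object_cache_file=/usr/local/nagios/var/objects.cache
status_file=/usr/local/nagios/var/status.dat
command_file=/usr/local/nagios/var/rw/nagios.cmd
lock_file=/tmp/nagios.lock
state_retention_file=/usr/local/nagios/var/retention.dat
EOF

# Sanity-check the configuration before (re)starting Nagios
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
```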
Q. Explain Host and Service Check Execution Option?
Ans:This option determines whether or not Nagios will execute Host/service checks when it initially (re)starts. If this option is disabled, Nagios will not actively execute any service checks and will remain in a sort of "sleep" mode (it can still accept passive checks unless you've disabled them). This option is most often used when configuring backup monitoring servers or when setting up a distributed monitoring environment. Note: If you have state retention enabled, Nagios will ignore this setting when it (re)starts and use the last known setting for this option (as stored in the state retention file), unless you disable the use_retained_program_state option. If you want to change this option when state retention is active (and the use_retained_program_state is enabled), you'll have to use the appropriate external command or change it via the web interface. Values are as follows:
0 = Don't execute host/service checks
1 = Execute host/service checks (default)
Q. Explain active and passive checks in Nagios?
Ans:Nagios monitors hosts and services in two ways: actively and passively. Active checks are the most common method for monitoring hosts and services.
A. Active checks:
1.Active checks are run on a regularly scheduled basis
2.Active checks are initiated by the check logic in the Nagios daemon.
When Nagios needs to check the status of a host or service it will execute a plugin and pass it information about what needs to be checked. The plugin will then check the operational state of the host or service and report the results back to the Nagios daemon. Nagios will process the results of the host or service check and take appropriate action as necessary (e.g. send notifications, run event handlers, etc).
Active checks are executed at regular intervals, as defined by the check_interval and retry_interval options in your host and service definitions, and on demand as needed.
Regularly scheduled checks occur at intervals equaling either the check_interval or the retry_interval in your host or service definitions, depending on what type of state the host or service is in. If a host or service is in a HARD state, it will be actively checked at intervals equal to the check_interval option. If it is in a SOFT state, it will be checked at intervals equal to the retry_interval option.
On-demand checks are performed whenever Nagios sees a need to obtain the latest status information about a particular host or service. For example, when Nagios is determining the reachability of a host, it will often perform on-demand checks of parent and child hosts to accurately determine the status of a particular network segment. On-demand checks also occur in the predictive dependency check logic in order to ensure Nagios has the most accurate status information.
B. Passive checks:
The key features of passive checks are as follows:
1.Passive checks are initiated and performed by external applications/processes
2.Passive check results are submitted to Nagios for processing
The major difference between active and passive checks is that active checks are initiated and performed by Nagios, while passive checks are performed by external applications.
Passive checks are useful for monitoring services that are:
Asynchronous in nature and cannot be monitored effectively by polling their status on a regularly scheduled basis
Located behind a firewall and cannot be checked actively from the monitoring host
Examples of asynchronous services that lend themselves to being monitored passively include SNMP traps and security alerts. You never know how many (if any) traps or alerts you'll receive in a given time frame, so it's not feasible to just monitor their status every few minutes. Passive checks are also used when configuring distributed or redundant monitoring installations.
Here's how passive checks work in more detail...
1.An external application checks the status of a host or service.
2.The external application writes the results of the check to the external command file.
3.The next time Nagios reads the external command file it will place the results of all passive checks into a queue for later processing. The same queue that is used for storing results from active checks is also used to store the results from passive checks.
4.Nagios will periodically execute a check result reaper event and scan the check result queue. Each service check result that is found in the queue is processed in the same manner - regardless of whether the check was active or passive. Nagios may send out notifications, log alerts, etc. depending on the check result information.
Q.What Are Objects?
Ans:Objects are all the elements that are involved in the monitoring and notification logic. Types of objects include:
Services :are one of the central objects in the monitoring logic. Services are associated with hosts and represent attributes of a host (CPU load, disk usage, uptime, etc.).
Service Groups :are groups of one or more services. Service groups can make it easier to (1) view the status of related services in the Nagios web interface and (2) simplify your configuration through the use of object tricks.
Hosts :are one of the central objects in the monitoring logic.Hosts are usually physical devices on your network (servers, workstations, routers, switches, printers, etc).
Host Groups :are groups of one or more hosts. Host groups can make it easier to (1) view the status of related hosts in the Nagios web interface and (2) simplify your configuration through the use of object tricks
Contacts :are the people involved in the notification process, along with their contact information.
Contact Groups :are groups of one or more contacts. Contact groups can make it easier to define all the people who get notified when certain host or service problems occur.
Commands :are used to tell Nagios what programs, scripts, etc. it should execute to perform host and service checks, send notifications, and so on.
Time Periods :are used to control when hosts and services can be monitored.
Notification Escalations :are used for escalating notifications.
Q.What Are Plugins?
Ans:Plugins are compiled executables or scripts (Perl scripts, shell scripts, etc.) that can be run from a command line to check the status or a host or service. Nagios uses the results from plugins to determine the current status of hosts and services on your network.
Nagios will execute a plugin whenever there is a need to check the status of a service or host. The plugin does something (notice the very general term) to perform the check and then simply returns the results to Nagios. Nagios will process the results that it receives from the plugin and take any necessary actions (running event handlers, sending out notifications, etc).
Q.How Do I Use Plugin X?
Ans:Almost all plugins will display basic usage information when you execute them using '-h' or '--help' on the command line. For example, if you want to know how the check_http plugin works or what options it accepts, try executing the following command:
./check_http --help
Q.Explain External Commands ?
Ans:Nagios can process commands from external applications (including the CGIs) and alter various aspects of its monitoring functions based on the commands it receives. External applications can submit commands by writing to the command file, which is periodically processed by the Nagios daemon.External commands can be used to accomplish a variety of things while Nagios is running. Example of what can be done include temporarily disabling notifications for services and hosts, temporarily disabling service checks, forcing immediate service checks, adding comments to hosts and services, etc
Q.When Does Nagios Check For External Commands?
Ans:At regular intervals specified by the command_check_interval option in the main configuration file, and immediately after event handlers are executed. The latter is in addition to the regular cycle of external command checks and provides immediate action if an event handler submits commands to Nagios.
External commands that are written to the command file have the following format:
[time] command_id;command_arguments
where time is the time (in time_t format) that the external application submitted the external command to the command file. The values for the command_id and command_arguments arguments will depend on what command is being submitted to Nagios.
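For example, a passive service check result can be submitted by writing one line in that format to the command file (PROCESS_SERVICE_CHECK_RESULT is a standard external command; the host and service names here are made up):

```shell
now=$(date +%s)    # current time in time_t format

# [time] command_id;command_arguments
printf '[%s] PROCESS_SERVICE_CHECK_RESULT;web01;HTTP;0;OK - site is up\n' \
    "$now" > /usr/local/nagios/var/rw/nagios.cmd
```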
Q.Explain Nagios State Types?
Ans:The current state of monitored services and hosts is determined by two components:
The status of the service or host (i.e. OK, WARNING, UP, DOWN, etc.)
The type of state the service or host is in
There are two state types in Nagios - SOFT states and HARD states. These state types are a crucial part of the monitoring logic, as they are used to determine when event handlers are executed and when notifications are initially sent out.
a.Soft States:
When a service or host check results in a non-OK or non-UP state and the check has not yet been repeated the number of times specified by the max_check_attempts directive in the service or host definition, this is called a soft error.
When a service or host recovers from a soft error, this is considered a soft recovery.
The following things occur when hosts or services experience SOFT state changes:
The SOFT state is logged. Event handlers are executed to handle the SOFT state. SOFT states are only logged if you enabled the log_service_retries or log_host_retries options in your main configuration file.
The only important thing that really happens during a soft state is the execution of event handlers. Using event handlers can be particularly useful if you want to try and proactively fix a problem before it turns into a HARD state. The $HOSTSTATETYPE$ or $SERVICESTATETYPE$ macros will have a value of "SOFT" when event handlers are executed, which allows your event handler scripts to know when they should take corrective action.
b.Hard states :occur for hosts and services in the following situations:
When a host or service check results in a non-UP or non-OK state and it has been (re)checked the number of times specified by the max_check_attempts option in the host or service definition. This is a hard error state.
When a host or service transitions from one hard error state to another error state (e.g. WARNING to CRITICAL).
When a service check results in a non-OK state and its corresponding host is either DOWN or UNREACHABLE.
When a host or service recovers from a hard error state. This is considered to be a hard recovery.
When a passive host check is received. Passive host checks are treated as HARD unless the passive_host_checks_are_soft option is enabled.
The following things occur when hosts or services experience HARD state changes:
The HARD state is logged.
Event handlers are executed to handle the HARD state.
Contacts are notified of the host or service problem or recovery.
The $HOSTSTATETYPE$ or $SERVICESTATETYPE$ macros will have a value of "HARD" when event handlers are executed, which allows your event handler scripts to know when they should take corrective action.
Q.What is State Stalking?
Ans:Stalking is purely for logging purposes.When stalking is enabled for a particular host or service, Nagios will watch that host or service very carefully and log any changes it sees in the output of check results. As you'll see, it can be very helpful to you in later analysis of the log files. Under normal circumstances, the result of a host or service check is only logged if the host or service has changed state since it was last checked. There are a few exceptions to this, but for the most part, that's the rule.
If you enable stalking for one or more states of a particular host or service, Nagios will log the results of the host or service check if the output from the check differs from the output from the previous check.
Q.Explain how Flap Detection works in Nagios?
Ans:Nagios supports optional detection of hosts and services that are "flapping". Flapping occurs when a service or host changes state too frequently, resulting in a storm of problem and recovery notifications. Flapping can be indicative of configuration problems (i.e. thresholds set too low), troublesome services, or real network problems.
Whenever Nagios checks the status of a host or service, it will check to see if it has started or stopped flapping. It does this by:
a.Storing the results of the last 21 checks of the host or service
b.Analyzing the historical check results to determine where state changes/transitions occur
c.Using the state transitions to determine a percent state change value (a measure of change) for the host or service
d.Comparing the percent state change value against low and high flapping thresholds
A host or service is determined to have started flapping when its percent state change first exceeds the high flapping threshold, and to have stopped flapping when its percent state change goes below the low flapping threshold (assuming it was previously flapping).
The historical check results are examined to determine where state changes/transitions occur. A state change occurs when an archived state differs from the archived state that immediately precedes it chronologically. Since the results of the last 21 checks are kept, there can be at most 20 state changes.
The flap detection logic uses the state changes to determine an overall percent state change for the service. This is a measure of volatility/change for the service. Services that never change state will have a 0% state change value, while services that change state each time they're checked will have 100% state change. Most services will have a percent state change somewhere in between.
Q.Explain Distributed Monitoring ?
Ans:Nagios can be configured to support distributed monitoring of network services and resources.
When setting up a distributed monitoring environment with Nagios, the central and distributed servers are configured differently.
The function of a distributed server is to actively perform checks of all the services you define for a "cluster" of hosts, meaning simply an arbitrary group of hosts on your network. Depending on your network layout, you may have several clusters at one physical location, or each cluster may be separated by a WAN, its own firewall, etc. One distributed server runs Nagios and monitors the services on the hosts in each cluster. A distributed server is usually a bare-bones installation of Nagios: it doesn't have to have the web interface installed, send out notifications, run event handler scripts, or do anything other than execute service checks if you don't want it to.
The purpose of the central server is simply to listen for service check results from one or more distributed servers. Although services are occasionally actively checked from the central server, active checks are performed only in dire circumstances.

Q.What is NRPE?
Ans: The NRPE addon is designed to allow you to execute Nagios plugins on remote Linux/Unix machines. The main reason for doing this is to allow Nagios to monitor "local" resources (like CPU load, memory usage, etc.) on remote machines. Since these public resources are not usually exposed to external machines, an agent like NRPE must be installed on the remote Linux/Unix machines.
The NRPE addon consists of two pieces:
– The check_nrpe plugin, which resides on the local monitoring machine
– The NRPE daemon, which runs on the remote Linux/Unix machine
When Nagios needs to monitor a resource or service on a remote Linux/Unix machine:
– Nagios will execute the check_nrpe plugin and tell it what service needs to be checked
– The check_nrpe plugin contacts the NRPE daemon on the remote host over an (optionally) SSL-protected connection
– The NRPE daemon runs the appropriate Nagios plugin to check the service or resource
– The results from the service check are passed from the NRPE daemon back to the check_nrpe plugin, which then returns the check results to the Nagios process.
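Typical check_nrpe invocations look like this (a hedged example: the plugin path and the remote IP are assumptions, and 'check_load' must be defined in the remote host's nrpe.cfg):

```shell
cd /usr/local/nagios/libexec

# 1. Test connectivity: with no command given, the daemon reports its version
./check_nrpe -H 192.168.1.10

# 2. Run a command defined on the remote side in nrpe.cfg
./check_nrpe -H 192.168.1.10 -c check_load
```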
Q.What is NDOUTILS ?
Ans:The NDOUTILS addon is designed to store all configuration and event data from Nagios in a database. Storing information from Nagios in a database will allow for quicker retrieval and processing of that data and will help serve as a foundation for the development of a new PHP-based web interface in Nagios 3.0.
MySQL databases are currently supported by the addon and PostgreSQL support is in development.
The NDOUTILS addon was designed to work for users who have:
– Single Nagios installations
– Multiple standalone or "vanilla" Nagios installations
– Multiple Nagios installations in distributed, redundant, and/or failover environments.

Each Nagios process, whether it is a standalone monitoring server or part of a distributed, redundant, or failover monitoring setup, is referred to as an "instance". In order to maintain the integrity of stored data, each Nagios instance must be labeled with a unique identifier or name.
Q.What are the components that make up the NDO utilities ?
Ans:There are four main components that make up the NDO utilities:
1. NDOMOD Event Broker Module :The NDO utilities include a Nagios event broker module (NDOMOD.O) that exports data from the Nagios daemon. Once the module has been loaded by the Nagios daemon, it can access all of the data and logic present in the running Nagios process. The NDOMOD module has been designed to export configuration data, as well as information about various runtime events that occur in the monitoring process, from the Nagios daemon. The module can send this data to a standard file, a Unix domain socket, or a TCP socket.
2. LOG2NDO Utility :The LOG2NDO utility has been designed to allow you to import historical Nagios and NetSaint log files into a database via the NDO2DB daemon (described later). The utility works by sending historical log file data to a standard file, a Unix domain socket, or a TCP socket in a format the NDO2DB daemon understands. The NDO2DB daemon can then be used to process that output and store the historical logfile information in a database.
3. FILE2SOCK Utility :The FILE2SOCK utility is quite simple. It reads input from a standard file (or STDIN) and writes all of that data to either a Unix domain socket or a TCP socket. The data that is read is not processed in any way before it is sent to the socket.
4. NDO2DB Daemon:The NDO2DB utility is designed to take the data output from the NDOMOD and LOG2NDO components and store it in a MySQL or PostgreSQL database. When it starts, the NDO2DB daemon creates either a TCP or Unix domain socket and waits for clients to connect. NDO2DB can run either as a standalone, multi-process daemon or under INETD (if using a TCP socket). Multiple clients can connect to the NDO2DB daemon's socket and transmit data simultaneously. A separate NDO2DB process is spawned to handle each new client that connects. Data is read from each client and stored in a user-specified database for later retrieval and processing.


Linux Boot Process



1. BIOS

Once control passes to the BIOS, it takes care of two things:

· Running the POST operation
· Selecting the first boot device

POST operation: POST (Power-On Self-Test) is the process of checking hardware availability. The BIOS keeps a list of all devices that were present in the previous system boot. To check whether each piece of hardware is available for the present boot, it sends an electrical pulse to each device in that list. If an electrical pulse is returned from a device, the BIOS concludes that the hardware is working fine and ready for use. If it does not receive a signal from a particular device, it treats that device as faulty or removed from the system. If any new hardware is attached to the system, the same check is performed to determine whether it is available. The updated list is stored in BIOS memory for the next boot.
Unlike the main RAM chip, CMOS RAM does not flush its memory when the computer is turned off. It remembers the entire configuration with the help of a battery called the CMOS battery.



POST check will confirm the integrity of the following devices

◦Timer IC's
◦DMA controllers
◦CPU
◦Video ROM
◦Motherboard
◦Keyboard
◦Printer port
◦Hard Drive etc


Selecting First Boot Device: Once POST is completed, BIOS has the list of available devices. BIOS memory holds the next steps, such as which boot device to select first. BIOS selects the first boot device and hands control back to the processor (CPU). If it does not find the first boot device, it checks the next one, then the third, and so on. If BIOS does not find any boot device at all, it alerts the user: "No boot device found".
Commands
To find BIOS details: #dmidecode -t 0



2. MBR

Once the BIOS gives control back to the CPU, it tries to load the MBR of the first boot device (we will consider it to be a hard disk). The MBR is a small part of the hard disk, just 512 bytes in size, and it resides in the very first sector of the disk.

What is MBR?

MBR (Master Boot Record) is a location on disk which holds details about

· Primary boot loader code (446 Bytes)

· Partition table information (64 Bytes)

· Magic number (2 Bytes)

Together these equal 512 bytes (446 + 64 + 2).

Primary boot loader code: This code provides boot loader information and the location of the actual boot loader code on the hard disk. This helps the CPU load the second stage of the boot loader.

Partition table: The MBR contains 64 bytes of partition table information, such as where each partition starts and ends, its size, and its type (primary, extended, etc.). An MBR-partitioned disk supports only four primary partitions because of this size limit: each partition entry requires 16 bytes, and only 64 bytes are reserved for the table, so at most we get 4 partitions.

Magic number: The magic number (0x55AA, the last two bytes) serves as a validation check for the MBR. If the MBR gets corrupted, a missing or wrong magic number tells the firmware that the boot sector is invalid.
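The 446 + 64 + 2 layout can be checked by carving a 512-byte image into its three regions with dd. The image below is synthetic (zeros plus the 0x55AA signature, written as octal \125\252), so it is safe to run anywhere; to inspect a real MBR you would read from the boot disk instead, as shown under Commands below.

```shell
# Build a dummy 512-byte "MBR": 510 zero bytes + the 2-byte magic number.
dd if=/dev/zero of=mbr.bin bs=510 count=1 2>/dev/null
printf '\125\252' >> mbr.bin          # 0x55 0xAA in octal escapes

# Carve the three regions at their standard offsets.
dd if=mbr.bin of=boot.bin   bs=1 count=446          2>/dev/null  # boot loader code
dd if=mbr.bin of=ptable.bin bs=1 skip=446 count=64  2>/dev/null  # partition table
dd if=mbr.bin of=magic.bin  bs=1 skip=510 count=2   2>/dev/null  # magic number

od -An -tx1 magic.bin                 # the two signature bytes: 55 aa
```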

Commands:
To see MBR details
[root@xxxxxxxx ~] #dd if=/dev/sda of=mbr.bin bs=512 count=1
[root@xxxxxxxx ~]# file mbr.bin

mbr.bin: x86 boot sector; GRand Unified Bootloader, stage1 version 0x3, boot drive 0x80, 1st sector stage2 0x41438, GRUB version 0.94; partition 1: ID=0x83, active, starthead 32, startsector 2048, 2097152 sectors; partition 2: ID=0x8e, starthead 170, startsector 2099200, 81786880 sectors, code offset 0x48


3. GRUB

GRUB stage 1:

The primary boot loader takes up less than 512 bytes of disk space in the MBR - too small a space to contain the instructions necessary to load a complex operating system.

Instead, the primary boot loader performs the function of loading either the stage 1.5 or stage 2 boot loader.

GRUB Stage 1.5:

Stage 1 can load stage 2 directly, but it is normally set up to load stage 1.5.

This is needed when the /boot partition is situated beyond the 1024th cylinder of the hard drive.

GRUB stage 1.5 is located in the first 30 KB of the hard disk, immediately after the MBR and before the first partition.

This space is utilized to store file system drivers and modules.

This enables stage 1.5 to load stage 2 from any known location on the file system, i.e. /boot/grub.





GRUB Stage 2:

This is responsible for loading the kernel listed in /boot/grub/grub.conf, along with any other modules needed.

It loads a GUI interface, i.e. the splash image located at /grub/splash.xpm.gz, with a list of available kernels; you can manually select a kernel, or after the default timeout the default kernel will boot.

The actual file is /boot/grub/grub.conf; /etc/grub.conf is a symlink to it.

Sample /boot/grub/grub.conf
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.18-194.26.1.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-194.26.1.el5 ro root=/dev/VolGroup00/root clocksource=acpi_pm divisor=10
        initrd /initrd-2.6.18-194.26.1.el5.img
title Red Hat Enterprise Linux Server (2.6.18-194.11.4.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-194.11.4.el5 ro root=/dev/VolGroup00/root clocksource=acpi_pm divisor=10
        initrd /initrd-2.6.18-194.11.4.el5.img
title Red Hat Enterprise Linux Server (2.6.18-194.11.3.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-194.11.3.el5 ro root=/dev/VolGroup00/root clocksource=acpi_pm divisor=10
        initrd /initrd-2.6.18-194.11.3.el5.img




4. Kernel
This can be considered the heart of operating system responsible for handling all system processes.

Kernel is loaded in the following stages:

1. As soon as the kernel is loaded, it configures the hardware and the memory allocated to the system.

2. Next it uncompresses the initrd image, mounts it, and loads all the necessary drivers. (The compressed kernel image itself is in zImage or bzImage format; the initrd is a separately compressed image.)

3. Loading and unloading of kernel modules is done with the help of programs like insmod and rmmod, present in the initrd image.

4. It checks for hard disk types, be it LVM or RAID.

5. It unmounts the initrd image and frees up all the memory occupied by the disk image.

6. Then the kernel mounts the root partition, as specified in grub.conf, read-only.

7. Next it runs the init process.

The initrd is created to make sure that drivers can load on your server.
Commands
To find kernel version in linux: #uname -r

5. Init

Looks at the /etc/inittab file to decide the Linux run level.

Following are the available run levels

0 - halt

1 – Single user mode

2 – Multiuser, without NFS

3 – Full multiuser mode

4 – unused

5 – X11

6 – Reboot

Init identifies the default run level from /etc/inittab and uses it to load all the appropriate programs.

Execute ‘grep initdefault /etc/inittab’ on your system to identify the default run level

If you want to get into trouble, you can set the default run level to 0 or 6; since you know what 0 and 6 mean, you probably won't do that.

Typically you would set the default run level to either 3 or 5.
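How init extracts that value can be shown on a copy of the inittab line; the demo file below stands in for the real /etc/inittab.

```shell
# The initdefault line carries the run level in its second colon-separated field.
printf 'id:3:initdefault:\n' > inittab.demo
grep initdefault inittab.demo | cut -d: -f2    # -> 3
```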

Next, as per the /etc/fstab entries, file system integrity is checked and the root partition is re-mounted read-write (earlier it was mounted read-only).










6. Runlevel programs
•When the Linux system is booting up, you might see various services getting started. For example, it might say “starting sendmail …. OK”. Those are the runlevel programs, executed from the run level directory as defined by your run level.
•Depending on your default init level setting, the system will execute the programs from one of the following directories.
◦Run level 0 – /etc/rc.d/rc0.d/
◦Run level 1 – /etc/rc.d/rc1.d/
◦Run level 2 – /etc/rc.d/rc2.d/
◦Run level 3 – /etc/rc.d/rc3.d/
◦Run level 4 – /etc/rc.d/rc4.d/
◦Run level 5 – /etc/rc.d/rc5.d/
◦Run level 6 – /etc/rc.d/rc6.d/
•Please note that there are also symbolic links available for these directories directly under /etc. So, /etc/rc0.d is linked to /etc/rc.d/rc0.d.
•Under the /etc/rc.d/rc*.d/ directories, you will see programs whose names start with S and K.
•Programs starting with S are used during startup. S for startup.
•Programs starting with K are used during shutdown. K for kill.
•The numbers right next to S and K in the program names are the sequence in which the programs should be started or killed.
•For example, S12syslog starts the syslog daemon with sequence number 12, and S80sendmail starts the sendmail daemon with sequence number 80. So, syslog will be started before sendmail.
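The ordering is nothing more than lexical sorting of the file names, which you can reproduce with dummy scripts (the directory and script names below are made up):

```shell
# init simply runs the S* names in sorted order; the two digits fix the sequence.
mkdir -p rc3-demo
touch rc3-demo/K35smb rc3-demo/S80sendmail rc3-demo/S12syslog rc3-demo/S10network
ls rc3-demo | grep '^S'    # S10network, then S12syslog, then S80sendmail
```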


Mingetty :- /sbin/mingetty

Ref:- http://www.golinuxhub.com/2014/03/step-by-step-linux-boot-process.html

Thursday, March 19, 2015


GOOD Reads Sysadmin Specific:
• Featured Questions - Super User - http://superuser.com/feeds/featured
• Schneier on Security - http://feeds.feedburner.com/schneier/fulltext
• Featured Questions - Server Fault - http://serverfault.com/feeds/featured
• 4sysops - http://4sysops.com/feed/
• Everything Sysadmin - http://everythingsysadmin.com/atom.xml
• blog.scottlowe.org - http://feeds.scottlowe.org/slowe/content/feed/
• Standalone Sysadmin - http://feeds.feedburner.com/standalone-sysadmin
• Packet Life - http://packetlife.net/blog/feed/
• Justin's IT Blog - http://jpaul.me/?feed=rss2
• The Lone Sysadmin - http://feeds.feedburner.com/lonesysadmin/mkpe
• The Nubby SysAdmin - http://feeds.feedburner.com/TheNubbyAdmin
• My SysAd Blog -- Unix - http://feeds.feedburner.com/sysad
• Coding Horror - http://feeds.feedburner.com/codinghorror/
• Cliff Saran’s Enterprise blog - http://www.computerweekly.com/blogs/it-fud-blog/atom.xml
• When a shell is not enough - http://www.shellguardians.com/feeds/posts/default
• Jon Skeet: Coding Blog - http://feeds.feedburner.com/JonSkeetCodingBlog
• UNIX System Administration: Solaris, AIX, HP-UX, Tru64, BSD. - http://feeds.feedburner.com/unixsadm
• Linux Sysadmin Blog - http://feeds.feedburner.com/LinuxSystemAdminsBlog
General Technology:
• Slashdot - News for nerds, stuff that matters - http://rss.slashdot.org/Slashdot/slashdot
• The Daily WTF - http://syndication.thedailywtf.com/TheDailyWtf
• Official Google Blog - http://feeds.feedburner.com/blogspot/MKuf
• Royal Pingdom - http://feeds.feedburner.com/royalpingdom

Tuesday, August 19, 2014

Setup DNS server on Linux

DNS (Domain Name System) is a core component of network infrastructure. The DNS service resolves hostnames into IP addresses and vice versa. For example, if we type http://www.ostechnix.com in a browser, the DNS server translates the domain name into its corresponding IP address. This lets us remember domain names instead of IP addresses.
DNS Server Installation in CentOS 6.5
This how-to tutorial will show you how to install and configure a Primary and a Secondary DNS server. The steps provided here were tested on the CentOS 6.5 32-bit edition, but they should work on RHEL 6.x (x stands for the version) and Scientific Linux 6.x too.
Scenario
Here is my test setup scenario:
[A] Primary(Master) DNS Server Details:

Operating System : CentOS 6.5 32 bit (Minimal Server)
Hostname : masterdns.ostechnix.com
IP Address : 192.168.1.200/24

[B] Secondary(Slave) DNS Server Details:

Operating System : CentOS 6.5 32 bit (Minimal Server)
Hostname : slavedns.ostechnix.com
IP Address : 192.168.1.201/24

Setup Primary(Master) DNS Server

[root@masterdns ~]# yum install bind* -y

1. Configure DNS Server
The main configuration of the DNS server will look like below. Edit the file and add the entries marked with '##' comments.

[root@masterdns ~]# vi /etc/named.conf
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
options {
        listen-on port 53 { 127.0.0.1; 192.168.1.200; }; ## Master DNS IP ##
        listen-on-v6 port 53 { ::1; };
        directory "/var/named";
        dump-file "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query { localhost; 192.168.1.0/24; }; ## IP Range ##
        allow-transfer { localhost; 192.168.1.201; }; ## Slave DNS IP ##
        recursion yes;
        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside auto;
        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";
        managed-keys-directory "/var/named/dynamic";
};
logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
zone "." IN {
        type hint;
        file "named.ca";
};
zone "ostechnix.com" IN {
        type master;
        file "fwd.ostechnix.com";
        allow-update { none; };
};
zone "1.168.192.in-addr.arpa" IN {
        type master;
        file "rev.ostechnix.com";
        allow-update { none; };
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

2. Create Zone files
Now we should create forward and reverse zone files which we mentioned in the ‘/etc/named.conf’ file.
[A] Create Forward Zone
Create ‘fwd.ostechnix.com’ file in the ‘/var/named’ directory and add the entries for forward zone as shown below.

[root@masterdns ~]# vi /var/named/fwd.ostechnix.com
$TTL 86400
@       IN SOA  masterdns.ostechnix.com. root.ostechnix.com. (
                2011071001  ;Serial
                3600        ;Refresh
                1800        ;Retry
                604800      ;Expire
                86400       ;Minimum TTL
)
@           IN  NS      masterdns.ostechnix.com.
@           IN  NS      slavedns.ostechnix.com.
masterdns   IN  A       192.168.1.200
slavedns    IN  A       192.168.1.201

[B] Create Reverse Zone
Create ‘rev.ostechnix.com’ file in the ‘/var/named’ directory and add the entries for reverse zone as shown below.

[root@masterdns ~]# vi /var/named/rev.ostechnix.com
$TTL 86400
@       IN SOA  masterdns.ostechnix.com. root.ostechnix.com. (
                2011071001  ;Serial
                3600        ;Refresh
                1800        ;Retry
                604800      ;Expire
                86400       ;Minimum TTL
)
@           IN  NS      masterdns.ostechnix.com.
@           IN  NS      slavedns.ostechnix.com.
masterdns   IN  A       192.168.1.200
slavedns    IN  A       192.168.1.201
200         IN  PTR     masterdns.ostechnix.com.
201         IN  PTR     slavedns.ostechnix.com.
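The PTR owner names above (200 and 201) follow the reverse-zone convention: the IP's octets are reversed and suffixed with in-addr.arpa. A quick shell sketch of that mapping (pure string manipulation, no DNS query involved):

```shell
# Reverse the four octets of an IPv4 address and append the in-addr.arpa suffix.
ip=192.168.1.200
reversed=$(printf '%s\n' "$ip" | awk -F. '{print $4"."$3"."$2"."$1}')
echo "${reversed}.in-addr.arpa"    # -> 200.1.168.192.in-addr.arpa
```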

3. Start the bind service

[root@masterdns ~]# service named start
Generating /etc/rndc.key: [ OK ]
Starting named: [ OK ]
[root@masterdns ~]# chkconfig named on

4. Allow DNS Server through iptables
Add the two lines that open UDP and TCP port 53 (shown at the top of the INPUT rules below) to the '/etc/sysconfig/iptables' file. This will allow all clients to access the DNS server.

[root@masterdns ~]# vi /etc/sysconfig/iptables
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -p udp -m state --state NEW --dport 53 -j ACCEPT
-A INPUT -p tcp -m state --state NEW --dport 53 -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

5. Restart iptables to save the changes

[root@masterdns ~]# service iptables restart
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
iptables: Applying firewall rules: [ OK ]

6. Test syntax errors of DNS configuration and zone files
[A] Check DNS Config file

[root@masterdns ~]# named-checkconf /etc/named.conf
[root@masterdns ~]# named-checkconf /etc/named.rfc1912.zones

[B] Check zone files

[root@masterdns ~]# named-checkzone ostechnix.com /var/named/fwd.ostechnix.com
zone ostechnix.com/IN: loaded serial 2011071001
OK
[root@masterdns ~]# named-checkzone 1.168.192.in-addr.arpa /var/named/rev.ostechnix.com
zone 1.168.192.in-addr.arpa/IN: loaded serial 2011071001
OK
[root@masterdns ~]#

7. Test DNS Server
Method A:

[root@masterdns ~]# dig masterdns.ostechnix.com
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.6 <<>> masterdns.ostechnix.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11496
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1

;; QUESTION SECTION:
;masterdns.ostechnix.com.       IN      A

;; ANSWER SECTION:
masterdns.ostechnix.com. 86400  IN      A       192.168.1.200

;; AUTHORITY SECTION:
ostechnix.com.          86400   IN      NS      masterdns.ostechnix.com.
ostechnix.com.          86400   IN      NS      slavedns.ostechnix.com.

;; ADDITIONAL SECTION:
slavedns.ostechnix.com. 86400   IN      A       192.168.1.201

;; Query time: 5 msec
;; SERVER: 192.168.1.200#53(192.168.1.200)
;; WHEN: Sun Mar 3 12:48:35 2013
;; MSG SIZE rcvd: 110

Method B:

[root@masterdns ~]# dig -x 192.168.1.200

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.6 <<>> -x 192.168.1.200
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40891
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;200.1.168.192.in-addr.arpa.    IN      PTR

;; ANSWER SECTION:
200.1.168.192.in-addr.arpa. 86400 IN    PTR     masterdns.ostechnix.com.

;; AUTHORITY SECTION:
1.168.192.in-addr.arpa. 86400   IN      NS      masterdns.ostechnix.com.
1.168.192.in-addr.arpa. 86400   IN      NS      slavedns.ostechnix.com.

;; ADDITIONAL SECTION:
masterdns.ostechnix.com. 86400  IN      A       192.168.1.200
slavedns.ostechnix.com. 86400   IN      A       192.168.1.201

;; Query time: 6 msec
;; SERVER: 192.168.1.200#53(192.168.1.200)
;; WHEN: Sun Mar 3 12:49:53 2013
;; MSG SIZE rcvd: 150

Method C:

[root@masterdns ~]# nslookup masterdns
Server:     192.168.1.200
Address:    192.168.1.200#53

Name:   masterdns.ostechnix.com
Address: 192.168.1.200

That's it. Now the Primary DNS server is ready.

Setup Secondary(Slave) DNS Server

[root@slavedns ~]# yum install bind* -y

1. Configure Slave DNS Server
Open the main configuration file '/etc/named.conf' and add the entries marked with '##' comments.

[root@slavedns ~]# vi /etc/named.conf
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
options {
        listen-on port 53 { 127.0.0.1; 192.168.1.201; }; ## Slave DNS IP ##
        listen-on-v6 port 53 { ::1; };
        directory "/var/named";
        dump-file "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query { localhost; 192.168.1.0/24; }; ## IP Range ##
        recursion yes;
        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside auto;
        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";
        managed-keys-directory "/var/named/dynamic";
};
logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
zone "." IN {
        type hint;
        file "named.ca";
};
zone "ostechnix.com" IN {
        type slave;
        file "slaves/ostechnix.fwd";
        masters { 192.168.1.200; };
};
zone "1.168.192.in-addr.arpa" IN {
        type slave;
        file "slaves/ostechnix.rev";
        masters { 192.168.1.200; };
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

2. Start the DNS Service

[root@slavedns ~]# service named start
Generating /etc/rndc.key: [ OK ]
Starting named: [ OK ]
[root@slavedns ~]# chkconfig named on

Now the forward and reverse zones are automatically replicated from the Master DNS server to the Slave DNS server. To verify, go to the DNS database location (i.e. '/var/named/slaves') and list the files.

[root@slavedns ~]# cd /var/named/slaves/
[root@slavedns slaves]# ls
ostechnix.fwd ostechnix.rev

Now check whether the correct zone files were replicated.

[A] Check Forward zone:

[root@slavedns slaves]# cat ostechnix.fwd
$ORIGIN .
$TTL 86400      ; 1 day
ostechnix.com           IN SOA  masterdns.ostechnix.com. root.ostechnix.com. (
                                2011071001 ; serial
                                3600       ; refresh (1 hour)
                                1800       ; retry (30 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                        NS      masterdns.ostechnix.com.
                        NS      slavedns.ostechnix.com.
$ORIGIN ostechnix.com.
masterdns               A       192.168.1.200
slavedns                A       192.168.1.201

[B] Check Reverse zone:

[root@slavedns slaves]# cat ostechnix.rev
$ORIGIN .
$TTL 86400      ; 1 day
1.168.192.in-addr.arpa  IN SOA  masterdns.ostechnix.com. root.ostechnix.com. (
                                2011071001 ; serial
                                3600       ; refresh (1 hour)
                                1800       ; retry (30 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                        NS      masterdns.ostechnix.com.
                        NS      slavedns.ostechnix.com.
$ORIGIN 1.168.192.in-addr.arpa.
200                     PTR     masterdns.ostechnix.com.
201                     PTR     slavedns.ostechnix.com.
masterdns               A       192.168.1.200
slavedns                A       192.168.1.201

3. Add the DNS Server details to all systems

[root@slavedns ~]# vi /etc/resolv.conf
# Generated by NetworkManager
search ostechnix.com
nameserver 192.168.1.200
nameserver 192.168.1.201
nameserver 8.8.8.8

4. Test DNS Server
Method A:

[root@slavedns ~]# dig slavedns.ostechnix.com

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.6 <<>> slavedns.ostechnix.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39096
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1

;; QUESTION SECTION:
;slavedns.ostechnix.com.        IN      A

;; ANSWER SECTION:
slavedns.ostechnix.com. 86400   IN      A       192.168.1.201

;; AUTHORITY SECTION:
ostechnix.com.          86400   IN      NS      masterdns.ostechnix.com.
ostechnix.com.          86400   IN      NS      slavedns.ostechnix.com.

;; ADDITIONAL SECTION:
masterdns.ostechnix.com. 86400  IN      A       192.168.1.200

;; Query time: 7 msec
;; SERVER: 192.168.1.200#53(192.168.1.200)
;; WHEN: Sun Mar 3 13:00:17 2013
;; MSG SIZE rcvd: 110

Method B:

[root@slavedns ~]# dig masterdns.ostechnix.com

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.6 <<>> masterdns.ostechnix.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12825
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1

;; QUESTION SECTION:
;masterdns.ostechnix.com.       IN      A

;; ANSWER SECTION:
masterdns.ostechnix.com. 86400  IN      A       192.168.1.200

;; AUTHORITY SECTION:
ostechnix.com.          86400   IN      NS      masterdns.ostechnix.com.
ostechnix.com.          86400   IN      NS      slavedns.ostechnix.com.

;; ADDITIONAL SECTION:
slavedns.ostechnix.com. 86400   IN      A       192.168.1.201

;; Query time: 13 msec
;; SERVER: 192.168.1.200#53(192.168.1.200)
;; WHEN: Sun Mar 3 13:01:02 2013
;; MSG SIZE rcvd: 110

Method C:

[root@slavedns ~]# nslookup slavedns
Server:     192.168.1.200
Address:    192.168.1.200#53

Name:   slavedns.ostechnix.com
Address: 192.168.1.201

Method D:

[root@slavedns ~]# nslookup masterdns
Server:     192.168.1.200
Address:    192.168.1.200#53

Name:   masterdns.ostechnix.com
Address: 192.168.1.200

Method E:

[root@slavedns ~]# dig -x 192.168.1.201

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.6 <<>> -x 192.168.1.201
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56991
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;201.1.168.192.in-addr.arpa.    IN      PTR

;; ANSWER SECTION:
201.1.168.192.in-addr.arpa. 86400 IN    PTR     slavedns.ostechnix.com.

;; AUTHORITY SECTION:
1.168.192.in-addr.arpa. 86400   IN      NS      masterdns.ostechnix.com.
1.168.192.in-addr.arpa. 86400   IN      NS      slavedns.ostechnix.com.

;; ADDITIONAL SECTION:
masterdns.ostechnix.com. 86400  IN      A       192.168.1.200
slavedns.ostechnix.com. 86400   IN      A       192.168.1.201

;; Query time: 6 msec
;; SERVER: 192.168.1.200#53(192.168.1.200)
;; WHEN: Sun Mar 3 13:03:39 2013
;; MSG SIZE rcvd: 150

Method F:

[root@slavedns ~]# dig -x 192.168.1.200

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.6 <<>> -x 192.168.1.200
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42968
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;200.1.168.192.in-addr.arpa.    IN      PTR

;; ANSWER SECTION:
200.1.168.192.in-addr.arpa. 86400 IN    PTR     masterdns.ostechnix.com.

;; AUTHORITY SECTION:
1.168.192.in-addr.arpa. 86400   IN      NS      slavedns.ostechnix.com.
1.168.192.in-addr.arpa. 86400   IN      NS      masterdns.ostechnix.com.

;; ADDITIONAL SECTION:
masterdns.ostechnix.com. 86400  IN      A       192.168.1.200
slavedns.ostechnix.com. 86400   IN      A       192.168.1.201

;; Query time: 4 msec
;; SERVER: 192.168.1.200#53(192.168.1.200)
;; WHEN: Sun Mar 3 13:04:15 2013
;; MSG SIZE rcvd: 150

Tuesday, April 16, 2013

Setup Swap file on Linux

Create Swap File on Linux

Step 1
Create Storage File
Type the following command to create a 512 MB swap file (524288 blocks of 1024 bytes = 512 MB):
# dd if=/dev/zero of=/swapfile1 bs=1024 count=524288

Where,

    if=/dev/zero : Read from /dev/zero. /dev/zero is a special file that provides as many null characters as needed to build the storage file /swapfile1.
    of=/swapfile1 : Write the storage file to /swapfile1.
    bs=1024 : Read and write 1024 bytes at a time.
    count=524288 : Copy only 524288 input blocks.
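The bs x count arithmetic can be verified at a smaller, safe scale. This demo uses the same 1024-byte block size but only 1024 blocks (1 MB) instead of 524288 (512 MB), and the file name is just for the demo:

```shell
# 1024-byte blocks x 1024 blocks = 1 MB (the real swap file uses count=524288).
dd if=/dev/zero of=swapdemo.img bs=1024 count=1024 2>/dev/null
wc -c < swapdemo.img    # 1048576 bytes
```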

Step 2
 Set Up a Linux Swap Area
# mkswap /swapfile1
# chown root:root /swapfile1
# chmod 0600 /swapfile1
# swapon /swapfile1

Step 3
Activate Swap in fstab

 vi /etc/fstab

/swapfile1 swap swap defaults 0 0

Step 4

 Reboot the system and verify that the swap space is active:


$ free -m