Tuesday, 26 January 2016

expr: Computation and String Handling




The syntax of the expr command is
expr [expression]
Note: You must leave spaces between the values and the operators. Otherwise expr may throw an error or simply print the whole expression as a string.

Arithmetic Operator Examples:
Example:
1. Sum of numbers
$ expr 5 + 3
8
2. Difference between two numbers
$ expr 10 - 6
4
3. Multiplying numbers
$ expr 7 \* 9
63
Here the * is a shell wildcard character, which is why it needs to be escaped with a backslash.
4. Dividing numbers
$ expr 6 / 4
1
The division operator returns only the integer quotient; the fractional part is discarded.
5. Remainder or modulus
$ expr 6 % 4
2
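
In shell scripts, expr arithmetic is typically combined with command substitution, for example to increment a counter. A minimal sketch (the variable name i is just an example):

i=0
i=`expr $i + 1`      # command substitution captures the output of expr
echo $i              # prints 1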

Comparison or Relational Operator Examples:
You can use the following comparison operators with the expr command:
val1 < val2  : Returns 1 if val1 is less than val2, otherwise 0.
val1 <= val2 : Returns 1 if val1 is less than or equal to val2, otherwise 0.
val1 > val2  : Returns 1 if val1 is greater than val2, otherwise 0.
val1 >= val2 : Returns 1 if val1 is greater than or equal to val2, otherwise 0.
val1 = val2  : Returns 1 if val1 is equal to val2, otherwise 0.
val1 != val2 : Returns 1 if val1 is not equal to val2, otherwise 0.
val1 | val2  : Returns val1 if val1 is neither null nor zero, otherwise val2.
val1 & val2  : Returns val1 if both val1 and val2 are neither null nor zero, otherwise 0.
Note: You have to escape most of these operators with a backslash, because the shell would otherwise interpret characters such as <, >, | and & as its own special characters.
$ expr 1 \< 2
1
$ expr 1 \<= 1
1
$ expr 2 \> 5
0
$ expr 2 \>= 5
0
$ expr 7 = 7
1
$ expr 9 != 18
1
$ expr 2 \| 5
2
$ expr 0 \| 5
5
$ expr 2 \& 5
2
$ expr 6 \& 3
6
$ expr 6 \& 0
0
$ expr 0 \& 3
0
String Function Examples:

1. Length of string
The length function is used to find the number of characters in a string.
$ expr length linux
5
$ expr length linux\ system
12
$ expr length "linux system"
12
If your string contains spaces, escape them with a backslash or enclose the whole string in double quotes.
2. Find Substring

You can extract a portion of the string by using the substr function. The syntax of substr function is

substr string position length

Here position is the character position in the string and length is the number of characters to extract from the main string. An example is shown below:

$ expr substr unixserver 5 6
server

3. Index of the substring

You can find the position of a set of characters in the main string using the index function. The syntax of the index function is shown below:

index string chars

The index function returns the position in the main string of the first character that also appears in chars. If none of the characters in chars occur in the main string, it returns 0. See the following examples:

$ expr index linux nux
3

$ expr index linux win
2

In the first example, 'n' is the first character of linux that appears in "nux", and it sits at position 3. In the second, the result is 2 because 'i' (the second character of linux) appears in "win"; if no character matched at all, the result would be 0.

4. Matching a regexp

The match function performs an anchored match of a regular expression against the beginning of the string. The syntax of the match function is shown below:

match string pattern

The match function returns the number of characters matched if a match is found. Otherwise, it returns 0. An alternative syntax is

string : pattern

The following examples show how to use the match function:

$ expr match linuxserver lin
3

$ expr match linuxserver server
0

Here, in the second expr, the pattern (server) exists in the main string. However, the pattern does not start at the beginning of the main string. That is why the match function returns 0.

Numeric , String and File Comparators

test:

When you use if to evaluate expressions, you need the test statement, because the true or false values returned by expressions cannot be handled directly by if.
test works in three ways:
1) Compares two numbers
2) Compares two strings, or a single one for a null value
3) Checks a file's attributes

Numeric Comparison:

Operator    Meaning
-eq         Equal
-gt         Greater than
-lt         Less than
-ne         Not equal
-ge         Greater than or equal
-le         Less than or equal


x=5 ; y=7

test $x -eq $y ; echo $?
1                          (the values are not equal, so test returns a non-zero exit status)

String Comparators:

s1 = s2     True if string s1 equals s2
s1 != s2    True if string s1 does not equal s2
-n stg      True if string stg is not null
-z stg      True if string stg is null
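
For example (the variable name and its value here are only for illustration):

$ name=unix
$ test "$name" = "unix" ; echo $?
0
$ test "$name" != "unix" ; echo $?
1
$ test -n "$name" ; echo $?
0
$ test -z "$name" ; echo $?
1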

File Comparators:

The following operators test various properties associated with a Unix file.

Assume the variable file holds the name of an existing file "test" that is 100 bytes in size and has read, write and execute permissions:


Operator  Description                                                                      Example
-b file   Checks if file is a block special file; if yes, the condition becomes true.      [ -b $file ] is false.
-c file   Checks if file is a character special file; if yes, the condition becomes true.  [ -c $file ] is false.
-d file   Checks if file is a directory; if yes, the condition becomes true.               [ -d $file ] is not true.
-f file   Checks if file is an ordinary file, as opposed to a directory or special file.   [ -f $file ] is true.
-g file   Checks if file has its set-group-ID (SGID) bit set.                              [ -g $file ] is false.
-k file   Checks if file has its sticky bit set.                                           [ -k $file ] is false.
-p file   Checks if file is a named pipe.                                                  [ -p $file ] is false.
-t fd     Checks if file descriptor fd is open and associated with a terminal.             [ -t $file ] is false.
-u file   Checks if file has its set-user-ID (SUID) bit set.                               [ -u $file ] is false.
-r file   Checks if file is readable.                                                      [ -r $file ] is true.
-w file   Checks if file is writable.                                                      [ -w $file ] is true.
-x file   Checks if file is executable.                                                    [ -x $file ] is true.
-s file   Checks if file has a size greater than 0.                                        [ -s $file ] is true.

-e file   Checks if file exists; true even if file is a directory.                         [ -e $file ] is true.
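
A short script illustrating a few of these operators, assuming the file "test" described above exists in the current directory:

#!/bin/bash
file="test"
if [ -f $file ]
then
    echo "$file is an ordinary file"
fi
[ -r $file ] && echo "$file is readable"
[ -d $file ] || echo "$file is not a directory"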





SHELL Scripts


When a group of commands has to be executed regularly, they should be stored in a file, and the file itself executed as a shell script or shell program. Though it's not mandatory, we normally use the .sh extension for shell scripts; this makes it easy to match them with wild cards. Shell scripts are executed in a separate child shell process.
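
For example, a minimal script (the file name first.sh is just an example):

$ cat first.sh
#!/bin/bash
# print a message followed by the current date
echo "Today's date is:"
date

$ chmod +x first.sh      # make the script executable
$ ./first.sh             # run it in a child shell process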

Read : Making  script interactive

A common use for user-created variables is storing information that a user enters in response to a prompt. Using read, scripts can accept input from the user and store that input in variables. The
read builtin reads one line from standard input and assigns the words on the line to one or more variables.

Example:

$ cat read1
echo -n "Enter Name: "
read name
echo "Your Name is $name"

$ ./read1
Enter Name: Sample
Your Name is Sample

Reading multiple inputs:

$ cat read3
read -p "Enter something: " word1 word2 word3
echo "Word 1 is: $word1"
echo "Word 2 is: $word2"
echo "Word 3 is: $word3"
$ ./read3
Enter something: this is something
Word 1 is: this
Word 2 is: is
Word 3 is: something

Command line arguments:

Like UNIX commands, shell scripts also accept arguments from the command line.
They can, therefore, run non-interactively and be used with redirection and
pipelines.

Positional Parameters:

Arguments are passed from the command line into a shell program using the
positional parameters $1 through to $9. Each parameter corresponds to the
position of the argument on the command line.

The first argument is read by the shell into the parameter $1, the second
argument into $2, and so on. Beyond $9, the parameter number must be enclosed
in braces, for example ${10}, ${11}, ${12}. Some shells don't support this
notation. In that case, to refer to parameters with numbers greater than 9, use
the shift command; this shifts the parameter list to the left. $1 is lost, while
$2 becomes $1, $3 becomes $2, and so on. The previously inaccessible tenth
parameter becomes $9 and can then be referred to, as the sketch below shows.
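
A small sketch of shift in action (the script name shiftdemo.sh and its arguments are hypothetical):

#!/bin/bash
# invoked as: ./shiftdemo.sh a b c d
echo "Before shift: \$1=$1 \$2=$2"    # prints: Before shift: $1=a $2=b
shift
echo "After shift:  \$1=$1 \$2=$2"    # prints: After shift:  $1=b $2=c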

Example:

#!/bin/bash
# Call this script with at least 3 parameters, for example
# sh scriptname 1 2 3

echo "first parameter is $1"

echo "Second parameter is $2"

echo "Third parameter is $3"

exit 0

Output:
[root@localhost ~]# sh parameters.sh 47 9 34

first parameter is 47

Second parameter is 9

Third parameter is 34

[root@localhost ~]# sh parameters.sh 4 8 3

first parameter is 4

Second parameter is 8

Third parameter is 3


 In addition to these positional parameters, there are a few other special
parameters used by the shell. Their significance is noted below.

$* - Stores the complete set of positional parameters as a single string.
$@ - Same as $*, but there is a subtle difference when enclosed in double quotes:
     "$@" expands to each argument as a separate word, while "$*" joins them into one word.
$# - Set to the number of arguments specified. This lets you design scripts
     that check whether the right number of arguments have been entered.
$0 - Refers to the name of the script itself.

$? - Exit status of the last command.
$$ - PID of the current shell.
$! - PID of the last background job.
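
A quick sketch that prints these special parameters (the script name and its arguments are hypothetical):

#!/bin/bash
echo "Script name        : $0"
echo "Number of arguments: $#"
echo "All arguments      : $*"
echo "PID of this shell  : $$"
sleep 2 &
echo "PID of last background job: $!"
date > /dev/null
echo "Exit status of last command: $?"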



Exit and Exit status of the command

exit 0   used when everything went fine
exit 1   used when something went wrong

test 100 -gt 99 ; echo $?

0                ---------- result is success

test 98 -gt 99 ; echo $?

1                ---------- result is failure
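
exit is normally used inside a script to report success or failure to the caller. A minimal sketch (the file name sample.txt is just an example):

#!/bin/bash
if [ -f sample.txt ]
then
    echo "sample.txt found"
    exit 0
else
    echo "sample.txt not found"
    exit 1
fi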

The LOGICAL OPERATORS && AND || -- CONDITIONAL EXECUTION

cmd1 && cmd2   (cmd2 is executed only when cmd1 succeeds)
cmd1 || cmd2   (cmd2 is executed only when cmd1 fails)

Example:
ls sample.txt && echo "sample.txt file found"
ls sample.txt || echo "sample.txt not found"






Friday, 22 January 2016

Simple Filters


Head : Displaying the beginning of a file

head prints the first part of each given input. By default, it prints the first 10 lines of each file.

Syntax and Options

head [OPTIONS]… [FILE]…

Short Option   Long Option          Description
-c N           --bytes=N            Print the first N bytes of each input file.
-n N           --lines=N            Print the first N lines of each input file.
-q             --quiet, --silent    Never print the header giving the file name.
-v             --verbose            Always print the header giving the file name.
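
For example, -q and -v control the file-name header printed when reading files (the file names here are placeholders):

$ head -n 2 -q file1.txt file2.txt    # two files, but no "==> file <==" headers
$ head -n 2 -v file1.txt              # header printed even for a single file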


1. Print the first N number of lines

To view the first N number of lines, pass the file name as an argument with -n option as shown below.

$ head -n 5 flavours.txt
Ubuntu
Debian
Redhat
Gentoo
Fedora core


Note: When you simply pass the file name as an argument to head, it prints out the first 10 lines of the file.

2. Print the first N lines by prefixing N with '-'

You don't even need to pass the -n option; simply specify the number of lines preceded by '-' as shown below.

$ head -4 flavours.txt
Ubuntu
Debian
Redhat
Gentoo

3. Print everything except the last N lines

By placing '-' in front of the number with the -n option, head prints all the lines of each file except the last N lines, as shown below.

$ head -n -5 flavours.txt
Ubuntu

4. Print the N number of bytes

You can use the -c option to print the first N bytes of the file.

$ head -c 5 flavours.txt
Ubunt
Note: As with the -n option, you can place '-' in front of the number to print everything except the last N bytes.

5. Passing the Output of Another Command to head

You can pass the output of other commands to the head command via a pipe, as shown below.

$ ls | head
bin
boot
cdrom
dev
etc
home
initrd.img
lib
lost+found
media

TAIL: 
tail outputs the last part, or "tail", of files.

Syntax

tail [OPTION]... [FILE]...


Description

tail prints the last 10 lines of each FILE to standard output. With more than one FILE, it precedes each set of output with a header giving the file name. If no FILE is specified, or if FILE is specified as a dash ("-"), tail reads from standard input.
Options

In the options listed below, arguments that are mandatory for long options are mandatory for short options as well:
-c, --bytes=K Output the last K bytes; alternatively, use "-c +K" to output bytes starting with the Kth byte of each file.
-f, --follow[={name|descriptor}] Output appended data as the file grows; -f, --follow, and --follow=descriptor are equivalent. If name is specified, the file with filename name will be followed, regardless of its file descriptor.
-F Same as "--follow=name --retry".
-n, --lines=K Output the last K lines, instead of the default of the last 10; alternatively, use "-n +K" to output lines starting with the Kth.
--max-unchanged-stats=N With --follow=name, reopen a FILE which has not changed size after N (default 5) iterations to see if it has been unlink'ed or renamed (this is the usual case of rotated log files).
--pid=PID With -f, terminate operation after process ID PID dies.
-q, --quiet, --silent Never output headers giving file names.
--retry Keep trying to open a file even when it is, or becomes, inaccessible; useful when following by name, i.e., with --follow=name.
-s, --sleep-interval=N With -f, sleep for approximately N seconds (default 1.0) between iterations. With --pid=P, check process P at least once every N seconds.
-v, --verbose Always output headers giving file names.
--help Display a help message, and exit.
--version Display version information, and exit.
Notes

If the first character of K (the number of bytes or lines) is a "+", tail prints everything beginning with the Kth item from the start of each file; otherwise, tail prints the last K items in the file. K may have a multiplier suffix: b (512), kB (1000), K (1024), MB (1000*1000), M (1024*1024), GB (1000*1000*1000), G (1024*1024*1024), and so on for T (terabyte), P (petabyte), E (exabyte), Z (zettabyte), Y (yottabyte).
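
For example, the "+K" form is handy for skipping a header line (assuming a file data.txt whose first line is a header):

$ tail -n +2 data.txt     # print everything starting from the 2nd line
$ tail -c +10 data.txt    # print everything starting from the 10th byte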

With --follow (-f), tail defaults to following the file descriptor, which means that even if a tail'ed file is renamed, tail will continue to track its end. This default behavior is not desirable when you really want to track the actual name of the file, not the file descriptor (for example, in a log rotation). Use --follow=name in that case. That causes tail to track the named file in a way that accommodates renaming, removal and creation.


Examples

tail myfile.txt

Outputs the last 10 lines of the file myfile.txt.

tail -n 100 myfile.txt

Outputs the last 100 lines of the file myfile.txt.

tail -f myfile.txt


Outputs the last 10 lines of myfile.txt, and monitors myfile.txt for updates; tail then continues to output any new lines that are added to myfile.txt.


CUT: Extract Fields and Columns from a file

Unix Cut Command Example

We will see the usage of cut command by considering the below text file as an example

> cat file.txt
unix or linux os
is unix good os
is linux good os

1. Write a unix/linux cut command to print characters by position?

The cut command can be used to print characters in a line by specifying the position of the characters. To print the characters in a line, use the -c option in cut command

cut -c4 file.txt
x
u
l

The above cut command prints the fourth character in each line of the file. You can print more than one character at a time by specifying the character positions in a comma separated list as shown in the below example

cut -c4,6 file.txt
xo
ui
ln

This command prints the fourth and sixth character in each line.

2.Write a unix/linux cut command to print characters by range?

You can print a range of characters in a line by specifying the start and end position of the characters.

cut -c4-7 file.txt
x or
unix
linu

The above cut command prints the characters from fourth position to the seventh position in each line. To print the first six characters in a line, omit the start position and specify only the end position.

cut -c-6 file.txt
unix o
is uni
is lin

To print the characters from tenth position to the end, specify only the start position and omit the end position.

cut -c10- file.txt
inux os
ood os
good os

Note that you must specify at least one endpoint; if you omit both the start and end positions, as in the command below, most cut implementations report an error instead of printing the entire line.

cut -c- file.txt

3.Write a unix/linux cut command to print the fields using the delimiter?

You can use the cut command, much like the awk command, to extract fields from a file using a delimiter. The -d option specifies the delimiter and the -f option specifies the field position.

cut -d' ' -f2 file.txt
or
unix
linux

This command prints the second field in each line by treating the space as delimiter. You can print more than one field by specifying the position of the fields in a comma delimited list.

cut -d' ' -f2,3 file.txt
or linux
unix good
linux good

The above command prints the second and third field in each line.

Note: If the delimiter you specified does not exist in a line, then the cut command prints the entire line. To suppress such lines, use the -s option, as in the example below.
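
For example, with the sample file above (none of its lines contain a colon):

cut -d':' -f2 file.txt       # no ':' in any line, so every line is printed unchanged
cut -d':' -f2 -s file.txt    # prints nothing; lines without the delimiter are suppressed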

4. Write a unix/linux cut command to display range of fields?

You can print a range of fields by specifying the start and end position.

cut -d' ' -f1-3 file.txt

The above command prints the first, second and third fields. To print the first three fields, you can ignore the start position and specify only the end position.

cut -d' ' -f-3 file.txt

To print the fields from the second field to the last field, omit the end position.

cut -d' ' -f2- file.txt

5. Write a unix/linux cut command to display the first field from /etc/passwd file?

The /etc/passwd file is delimited by a colon (:). The cut command to display the first field of /etc/passwd is

cut -d':' -f1 /etc/passwd

6. The input file contains the below text

> cat filenames.txt
logfile.dat
sum.pl
add_int.sh

Using the cut command, extract the portion after the dot.

First reverse the text in each line, cut the first field, and then reverse it back:


rev filenames.txt | cut -d'.' -f1 | rev

Paste
The paste command displays the corresponding lines of multiple files side-by-side.
Syntax

paste [OPTION]... [FILE]...
Description

paste writes lines consisting of the sequentially corresponding lines from each FILE, separated by tabs, to the standard output. With no FILE, or when FILE is a dash ("-"), paste reads from standard input.
Options

-d, --delimiters=LIST reuse characters from LIST instead of tabs.
-s, --serial paste one file at a time instead of in parallel.
--help Display a help message, and exit.
--version Display version information, and exit.
Examples


paste file1.txt file2.txt
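
For instance, assuming two small files with one entry per line (paste separates the joined fields with a tab by default):

$ cat file1.txt
1
2
3
$ cat file2.txt
one
two
three
$ paste file1.txt file2.txt
1	one
2	two
3	three
$ paste -d'-' file1.txt file2.txt
1-one
2-two
3-three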


Sort:

The sort command orders the lines of text files. You can sort the data in a text file and display the output on the screen, or redirect it to a file. Depending on your requirement, sort provides several command line options for sorting the data.

Sort Command Syntax:

$ sort [options] [file]
For example, here is a test file:

$ cat test
zzz
sss
qqq
aaa
BBB
ddd
AAA
And, here is what you get when sort command is executed on this file without any option. It sorts lines in test file and displays sorted output.

$ sort test
aaa
AAA
BBB
ddd
qqq
sss
zzz
1. Perform Numeric Sort using -n option

If we want to sort on numeric value, then we can use the -n or --numeric-sort option.

Create the following test file for this example:

$ cat test
22 zzz
33 sss
11 qqq
77 aaa
55 BBB
The following sort command sorts the lines of the test file numerically on the first word of each line and displays the sorted output.

$ sort -n test
11 qqq
22 zzz
33 sss
55 BBB
77 aaa
2. Sort Human Readable Numbers using -h option

If we want to sort on human readable numbers (e.g., 2K 1M 1G), then we can use the -h or --human-numeric-sort option.



Create the following test file for this example:

$ cat test
2K
2G
1K
6T
1T
1G
2M
The following sort command sorts human readable numbers (i.e 1K = 1 Thousand, 1M = 1 Million, 1G = 1 Giga, 1T = 1 Tera) in test file and displays sorted output.

$ sort -h test
1K
2K
2M
1G
2G
1T
6T
3. Sort Months of a Year using -M option

If we want to sort in the order of the months of the year, then we can use the -M or --month-sort option.

Create the following test file for this example:

$ cat test
sept
aug
jan
oct
apr
feb
mar11
The following sort command sorts the lines of the test file in month order. Note that each line should start with at least the 3-character abbreviation of the month name (e.g. jan, feb, mar). If we give only ja for January or au for August, the sort command will not recognize it as a month name.

$ sort -M test
jan
feb
mar11
apr
aug
sept
oct
4. Check if Content is Already Sorted using -c option

If we want to check whether the data in a text file is already sorted, we can use the -c or --check (--check=diagnose-first) option.

Create the following test file for this example:

$ cat test
2
5
1
6
The following sort command checks whether the file's data is sorted. If it is not, it reports the first out-of-order line along with its line number and value.

$ sort -c test
sort: test:3: disorder: 1
5. Reverse the Output and Check for Uniqueness using -r and -u options

If we want the sorted output in reverse order, we can use the -r or --reverse option. If the file contains duplicate lines, the -u option can be used to keep only unique lines in the sorted output.

Create the following test file for this example:

$ cat test
5
2
2
1
4
4
The following sort command sorts lines in test file in reverse order and displays sorted output.

$ sort -r test
5
4
4
2
2
1
The following sort command sorts lines in test file in reverse order and removes duplicate lines from sorted output.

$ sort -r -u test
5
4
2
1
6. Selectively Sort the Content, Customize delimiter, Write output to a file using  -k, -t, -o options

If we want to sort on a particular column or word position within the lines of a text file, the -k option can be used. If the words in each line are separated by a delimiter other than a space, we can specify that delimiter with the -t option. And instead of displaying the result on standard output, we can write it to a specified output file with the -o option.

Create the following test file for this example:

$ cat test
aa aa zz
aa aa ff
aa aa tt
aa aa kk
The following sort command sorts lines in test file on the 3rd word of each line and displays sorted output.

$ sort -k3 test
aa aa ff
aa aa kk
aa aa tt
aa aa zz
$ cat test
aa|5a|zz
aa|2a|ff
aa|1a|tt
aa|3a|kk
Here, several options are used together. In the test file, the words in each line are separated by the delimiter '|'. The command sorts the lines on the numeric value of the 2nd field and stores the sorted output in the specified output file.

$ sort -n -t'|' -k2 test -o outfile
The contents of output file are shown below.

$ cat outfile
aa|1a|tt
aa|2a|ff
aa|3a|kk
aa|5a|zz

Uniq
uniq reports or filters out repeated lines in a file.
Syntax

uniq [OPTION]... [INPUT [OUTPUT]]
Description

uniq filters out adjacent, matching lines from input file INPUT, writing the filtered data to output file OUTPUT.

If INPUT is not specified, uniq reads from the standard input.

If OUTPUT is not specified, uniq writes to the standard output.

If no options are specified, matching lines are merged to the first occurrence.
Options

-c, --count Prefix lines with a number representing how many times they occurred.
-d, --repeated Only print duplicated lines.
-D, --all-repeated[=delimit-method] Print all duplicate lines. delimit-method may be one of the following:

none Do not delimit duplicate lines at all. This is the default.
prepend Insert a blank line before each set of duplicated lines.
separate Insert a blank line between each set of duplicated lines.
The -D option is the same as specifying --all-repeated=none.
-f N, --skip-fields=N Avoid comparing the first N fields of a line before determining uniqueness. A field is a group of characters, delimited by whitespace.

This option is useful, for instance, if your document's lines are numbered, and you want to compare everything in the line except the line number. If the option -f 1 were specified, the adjacent lines

1 This is a line.
2 This is a line.
would be considered identical. If no -f option were specified, they would be considered unique.
-i, --ignore-case Normally, comparisons are case-sensitive. This option performs case-insensitive comparisons instead.
-s N, --skip-chars=N Avoid comparing the first N characters of each line when determining uniqueness. This is like the -f option, but it skips individual characters rather than fields.
-u, --unique Only print unique lines.
-z, --zero-terminated End lines with 0 byte (NULL), instead of a newline.
-w, --check-chars=N Compare no more than N characters in lines.
--help Display a help message and exit.
--version Output version information and exit.
Notes

uniq does not detect repeated lines unless they are adjacent. You may want to sort the input first, or use sort -u instead of uniq.
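
Because uniq only compares adjacent lines, the usual idiom is to sort first. For example (the file names are placeholders):

sort names.txt | uniq > unique_names.txt          # same effect as: sort -u names.txt
sort access.log | uniq -c | sort -rn | head -5    # five most frequently repeated lines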
Examples

Let's say we have an eight-line text file, myfile.txt, which contains the following text:
This is a line.
This is a line.
This is a line.

This is also a line.
This is also a line.

This is also also a line.
Here are several ways to run uniq on this file, and the output each creates:
uniq myfile.txt

This is a line.

This is also a line.

This is also also a line.
uniq -c myfile.txt

      3 This is a line.
      1
      2 This is also a line.
      1
      1 This is also also a line.
uniq -d myfile.txt

This is a line.
This is also a line.
uniq -u myfile.txt


This is also also a line.

Find Command

 The Linux find command is very powerful. It can search the entire filesystem to find files and directories according to the search criteria you specify. Besides using the find command to locate files, you can also use it to execute other Linux commands (grep, mv, rm, etc.) on the files and directories you find, which makes find extremely powerful.

  Syntax: find (starting directory) (matching criteria and actions)

basic 'find file' commands
--------------------------
find / -name foo.txt -type f -print             # full command
find / -name foo.txt -type f                    # -print isn't necessary
find / -name foo.txt                            # don't have to specify "type==file"
find . -name foo.txt                            # search under the current dir
find . -name "foo.*"                            # wildcard
find . -name "*.txt"                            # wildcard
find /users/al -name Cookbook -type d           # search '/users/al'

search multiple dirs
--------------------
find /opt /usr /var -name foo.scala -type f     # search multiple dirs

case-insensitive searching
--------------------------
find . -iname foo                               # find foo, Foo, FOo, FOO, etc.
find . -iname foo -type d                       # same thing, but only dirs
find . -iname foo -type f                       # same thing, but only files

find files with different extensions
------------------------------------
find . -type f \( -name "*.c" -o -name "*.sh" \)                       # *.c and *.sh files
find . -type f \( -name "*cache" -o -name "*xml" -o -name "*html" \)   # three patterns

find files that don't match a pattern (-not)
--------------------------------------------
find . -type f -not -name "*.html"                                # find all files not ending in ".html"

find files by text in the file (find + grep)
--------------------------------------------
find . -type f -name "*.java" -exec grep -l StringBuffer {} \;    # find StringBuffer in all *.java files
find . -type f -name "*.java" -exec grep -il string {} \;         # ignore case with -i option
find . -type f -name "*.gz" -exec zgrep 'GET /foo' {} \;          # search for a string in gzip'd files

5 lines before, 10 lines after grep matches
-------------------------------------------
find . -type f -name "*.scala" -exec grep -B5 -A10 'null' {} \;
  

find files and act on them (find + exec)
----------------------------------------
find /usr/local -name "*.html" -type f -exec chmod 644 {} \;      # change html files to mode 644
find htdocs cgi-bin -name "*.cgi" -type f -exec chmod 755 {} \;   # change cgi files to mode 755
find . -name "*.pl" -exec ls -ld {} \;                            # run ls command on files found

find and copy
-------------
find . -type f -name "*.mp3" -exec cp {} /tmp/MusicFiles \;       # cp *.mp3 files to /tmp/MusicFiles

copy one file to many dirs
--------------------------
find dir1 dir2 dir3 dir4 -type d -exec cp header.shtml {} \;      # copy the file header.shtml to those dirs

find and delete
---------------
find . -type f -name "Foo*" -exec rm {} \;                        # remove all "Foo*" files under current dir
find . -type d -name CVS -exec rm -r {} \;                        # remove all subdirectories named "CVS" under current dir

find files by modification time
-------------------------------
find . -mtime 1               # 24 hours
find . -mtime -7              # last 7 days
find . -mtime -7 -type f      # just files
find . -mtime -7 -type d      # just dirs

find files by modification time using a temp file
-------------------------------------------------
touch -t 09301330 poop        # 1) create a temp file with a specific timestamp (MMDDhhmm)
find . -newer poop            # 2) list files modified more recently than the temp file
rm poop                       # 3) rm the temp file

find with time: this works on mac os x
--------------------------------------
find / -newerct '1 minute ago' -print

find and tar
------------
find . -type f -name "*.java" | xargs tar cvf myfile.tar
find . -type f -name "*.java" | xargs tar rvf myfile.tar

find, tar, and xargs
--------------------
find . -type f -name '*.mp3' -mtime -180 -print0 | xargs -0 tar rvf music.tar
     (-print0 helps handle spaces in filenames)
   

find and pax (instead of xargs and tar)
---------------------------------------
find . -type f -name "*html" | xargs tar cvf jw-htmlfiles.tar -
find . -type f -name "*html" | pax -w -f jw-htmlfiles.tar
   
You have several options for matching criteria:

-atime n File was accessed n days ago
-mtime n File was modified n days ago
-size n File is n blocks big (a block is 512 bytes)
-type c Specifies file type: f=regular file, d=directory
-fstype typ Specifies file system type: 4.2 or nfs
-name nam The filename is nam
-user usr The file's owner is usr
-group grp The file's group owner is grp
-perm p The file's access mode is p (where p is an integer)

You can use + (plus) and - (minus) modifiers with the atime, mtime, and size criteria to increase their usefulness, for example:

-mtime +7 Matches files modified more than seven days ago
-atime -2        Matches files accessed less than two days ago
-size +100 Matches files larger than 100 blocks (50KB)
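
A few illustrative uses of these criteria and modifiers (the paths, user name, and sizes are placeholders):

find /home -user al -type f            # files owned by user 'al'
find /var/log -size +100 -type f       # files larger than 100 blocks (50KB)
find . -perm 644 -type f               # files whose access mode is exactly 644
find /tmp -atime +30                   # files not accessed in the last 30 days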
By default, multiple options are joined by "and". You may specify "or" with the -o flag and the use of grouped parentheses. To match all files modified more than 7 days ago and accessed more than 30 days ago, use:

  \( -mtime +7 -atime +30 \)

To match all files modified more than 7 days ago or accessed more than 30 days ago, use:

  \( -mtime +7 -o -atime +30 \)

You may specify "not" with an exclamation point. To match all files ending in .txt except the file notme.txt, use:

  \! -name notme.txt -name \*.txt

You can specify the following actions for the list of files that the find command locates:

-print Display pathnames of matching files.
-exec cmd Execute command cmd on a file.
-ok cmd Prompt before executing the command cmd on a file.
-mount (System V) Restrict to file system of starting directory.
-xdev (BSD) Restrict to file system of starting directory.
-prune (BSD) Don't descend into subdirectories.
Executed commands must end with \; (a backslash and semi-colon) and may use {} (curly braces) as a placeholder for each file that the find command locates. For example, for a long listing of each file found, use:

  -exec ls -l {} \;

Matching criteria and actions may appear in any order and are evaluated from left to right.

More File Attributes

File Systems and Inodes


Every file is associated with a table that contains everything you could possibly need to know about the file -- except its name and contents. This table is called the inode (index node) and is accessed by the inode number. The inode stores:

1) File type (regular, directory, device, etc.)
2) File permissions
3) Number of links
4) UID of the owner
5) GID of the group owner
6) File size in bytes
7) Date and time of last modification
8) Date and time of last access
9) Date and time of the last change to the inode
10) Array of pointers that keep track of all disk blocks used by the file
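
You can inspect some of this inode information from the shell, for example (emp.list is just an example file name):

ls -i emp.list      # print the inode number along with the file name
ls -li emp.list     # long listing: inode, permissions, link count, owner, group, size, mtime
stat emp.list       # full inode details, including access/modify/change times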

Q. What are links in Unix?

     Creating a link is a way of giving a file another name, a kind of shortcut for accessing it. The two different types of links in UNIX are:


Soft Links or Symbolic Links
Hard Links

What is symbolic link or symlink?
A symbolic link, often called a symlink or soft link, is very similar to what we know from Windows: a shortcut. Symbolic links exist in the Windows world too, but for the simplicity of this explanation, let's work with the comparison that a symlink is a kind of shortcut. A symbolic link contains the path of the target file it points to.

What is hard link?
A hard link (often also called a hardlink) is a rather different object compared to a symlink. A hard link is a directory entry that points directly to a file: a name stored in a directory structure that refers the operating system to the file data when it is accessed. The important part is that a hard link is inseparable from its file; if you make changes through a hard link, you are changing the underlying file itself.

A hard link can only refer to data that exists on the same file system.

Many of us are used to Windows, where we think of files as living inside folders. In Linux/Unix, a file's data is not stored "inside" a directory; each file is assigned an inode number, which the system uses to locate it, and a directory entry is just a name that points to that inode. Each file can therefore have multiple hard links located in various directories, and a file is not deleted until no hard links to it remain.

Differences between symbolic link and hard link

Let's summarize our findings. The list below summarizes some differences between a symlink and a hard link:

Hard links cannot be created for directories (folders); a hard link can only be created for a file.
Symbolic links or symlinks can link to a directory (folder).
Removing the original file that your hard link points to does not remove the hard link itself; the hard link still provides the content of the underlying file.
If you remove the hard link or the symlink itself, the original file stays intact.

Removing the original file does not remove the attached symbolic link, but without the original file the symlink is useless; it becomes a dangling link pointing to a file that no longer exists (unlike a hard link, which keeps the data accessible).


Hard link limitations
1) You can't have two linked filenames in two different file systems. In other words, you can't link a filename in the /usr file system to another in the /home file system.
2) You can't hard-link a directory, even within the same file system.


Soft Links

ln -s emp.list employees  

Hard Links


ln emp.list employees      (employees must not already exist; afterwards both names refer to the same inode)

Standard Input and Output Redirection



Redirection is one of Unix's strongest points. 
Whenever you run a program you get some output at the shell prompt. In case you don't want that output to appear in the shell window, you can redirect it elsewhere: you can make the output go into a file, or perhaps directly to the printer.

This is known as Redirection. Not only can the output of programs be redirected, you can also redirect the input for programs.

Every program has 3 important files to work with: standard input, standard output, and standard error. These 3 files are always open when a program runs.


File Descriptor    Descriptor Points To
0                  Standard Input (generally the keyboard)
1                  Standard Output (generally the display/screen)
2                  Standard Error Output (generally the display/screen)

Output Redirection 

The most common use of Redirection is to redirect the output (that normally goes to the terminal) from a command to a file instead. This is known as Output Redirection. This is generally used when you get a lot of output when you execute your program. Often you see that screens scroll past very rapidly. You could get all the output in a file and then even transfer that file elsewhere or mail it to someone.

The way to redirect the output is by using the ' > ' operator in the shell command you enter. This is shown below. The ' > ' symbol is known as the output redirection operator. Any command that outputs its results to the screen can have its output sent to a file instead.

$ ls > listing

The ' ls ' command would normally give you a directory listing. Since you have the ' > ' operator after the ' ls ' command, redirection would take place. What follows the ' > ' tells Unix where to redirect the output. In our case it would create a file named ' listing ' and write the directory listing in that file. You could view this file using any text editor or by using the cat command.

Note: If the file mentioned already exists, it is overwritten. So care should be taken to enter a proper name. In case you want to append to an existing file, then instead of the ' > ' operator you should use the ' >> ' operator. This would append to the file if it already exists, else it would create a new file by that name and then add the output to that newly created file.



Input Redirection

Input Redirection is not as popular as Output Redirection, since most of the time you would expect the input to be typed at the keyboard. But when used effectively, Input Redirection can be of great use. The general use of Input Redirection is when you already have a file ready and would like to run some command on that file.

You can use Input Redirection by typing the ' < ' operator. An excellent example of Input Redirection has been shown below.

$ mail cousin < my_typed_letter

The above command would start the mail program with contents of the file named ' my_typed_letter ' as the input since the Input Redirection operator was used.

Note: You can't use Input Redirection with every program/command. Only commands that accept input from the keyboard can be redirected to take their input from a text file instead. Similarly, Output Redirection is only useful when the program sends its output to the terminal; redirecting the output of a program that runs under X would be of no use to you.



Error Redirection

This is a very popular feature that many Unix users are happy to learn. If you have worked with Unix for some time, you must have realised that many of the commands you type produce a lot of error messages that you are not really bothered about. For example, whenever I perform a search for a file, I always get a lot of permission denied error messages. There may be ways to fix those things, but the simplest way is to redirect the error messages elsewhere so that they don't bother me. In my case I know that the errors I get while searching for files are of no use to me.

Here is a way to redirect the error messages

$ myprogram 2>errorsfile

The above command would execute a program named ' myprogram ', and whatever errors are generated while executing it would be written to a file named ' errorsfile ' rather than displayed on the screen. Remember that 2 is the file descriptor of standard error, so ' 2> ' means redirect the error output.

$ myprogram 2>>all_errors_till_now

The above command would be useful in case you have been saving all the error messages for some later use. This time the error messages would append to the file rather than create a new file.


You might realize that in the above case, since I wasn't interested in the error messages generated by the program, I redirected them to a file. But because those error messages don't interest me, I would have to delete the file that gets created every time I run the command; otherwise I would end up with several such files lying around wherever I redirect my unwanted error output. An excellent way around this is shown below.
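
A sketch of that idea: send the unwanted error messages to the null device, /dev/null, which simply discards everything written to it (the find command here is just an example):

$ find / -name "*.conf" -print 2>/dev/null

The errors (for example, the permission denied messages) are thrown away, while the normal output still appears on the screen.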