Linux command notes (continuously updated)

  1. Ctrl + C: stop the current process
  2. vi open or create a file
  3. ~ the user's home directory
  4. . a leading dot marks a hidden file
  5. source execute a file in the current shell
  6. clear clear the terminal window
  7. chsh change the user's login shell
    e.g. chsh -s /bin/bash (-s sets the given shell as your login shell)
  8. alias use a shorter command in place of the original command
    e.g. alias gfortran="gfortran-mp-12"
  9. # marks the rest of the line as a comment instead of a command
  10. cd change directory
  11. ls list files
  12. cd ../ go up one directory level
  13. cd path change directory to path
    e.g.
cd /home/phoenixnap/Desktop
# return to the last path
cd -
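As a quick illustration of item 5 above, `source` executes a file's commands in the current shell, so variables the file sets remain visible afterwards (the file name `env.sh` is made up for this example):

```shell
# Create a small file that sets a variable (hypothetical file name)
echo 'GREETING="Hello from env.sh"' > env.sh

# Run it in the current shell with source; the variable persists
source env.sh
echo "${GREETING}"
```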
  14. pwd print working directory
  15. mkdir make a new directory
mkdir -p folder_07/sub_folder_01

With the -p option, mkdir can also create a nested directory/folder structure (a tree) in one step.
16. touch create a new (empty) file

touch file_02.txt file_03.txt
touch file_{04,05,06}.txt #create several files at one time
touch -t 201701010659 file_00.txt # set the file's timestamp
  17. ncdump -h filename show the header of filename (a netCDF file)
  18. grep search for matching text in files
grep -n "Hello" HelloWorld1.c
# The -n option will display the line numbers where the string appears.
grep -w "document" SciComp.tex
# The -w option will search for the exact word.
grep -iwn "document" SciComp.tex
# -i means a case-insensitive search
grep -v "Hello" HelloWorld1.c
# The -v option returns all lines that do not contain the string. 
# Print a certain number of lines before and after the match
grep -B5 "evensidemargin" SciComp.tex
grep -A2 "documentclass" SciComp.tex
# Search for lines that begin or end with a certain string
# grep -E is equivalent to egrep
# ^ represents beginning of a line; $ represents the end of a line
grep -E "^01" ListOfExpts_Reference.dat
egrep "^01" ListOfExpts_Reference.dat

grep -E "06$" ListOfExpts_Completed.dat
egrep "06$" ListOfExpts_Completed.dat
# Search for lines that contain one of the two strings

grep -E "(algorithm|information)" SciComp.tex
egrep "(algorithm|information)" SciComp.tex
  19. ls -l (aliased to ll on my system) list files and directories with various additional information
  20. rm -i (aliased to rm on my system) remove files or directories with confirmation
  21. ncdump -v variable filename print variable from filename
  22. ncdump -v variable filename1 > filename2 save variable from filename1 into filename2
  23. ncdump -h filename | grep -i variable search for variable in the header of filename; -i means ignore case, and the vertical bar (pipe) feeds the output of the command on its left into the command on its right
  24. ncks extract parts of the data
    e.g.
    ncks -F -d time,1 input.nc output.nc
    "-d time,1" selects from the time dimension; "-F" uses Fortran-style indexing, which starts from 1, not 0, so "1" means the first time step. The whole command extracts that part of the data from input.nc and saves it into output.nc.
    e.g.
ncks -F -d time,1,10 input.nc output.nc
ncks -F -d time,1,90,5 input.nc output.nc
ncks -v variable -F -d node,1 input.nc output.nc

This outputs variable together with its associated coordinate data on node 1.

ncks -C -v variable -F -d node,1 input.nc output.nc

With the -C option, only variable itself is output on node 1 (associated coordinate variables are excluded).

ncks -C -v variable -F -d node,1 -d node,5 -d node,33 input.nc output.nc

This extracts three non-contiguous nodes: 1, 5, and 33.
25. ncrcat input1 input2 output netCDF Record Concatenator: concatenate input1 and input2 along the record dimension and save the result as output
26. ncdump -h output.nc | head -20 show only the first 20 lines of the header of output.nc
27. chgrp change group
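A minimal, self-contained sketch of chgrp; it uses the current user's own primary group so it works without special privileges, and sample.txt is a made-up name:

```shell
# Create a scratch file (hypothetical name)
touch sample.txt

# Change its group to the current user's primary group
chgrp "$(id -gn)" sample.txt

# Verify: the fourth column of ls -l shows the group
ls -l sample.txt
```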
28. cp copy

cp -i file1.txt file2.txt

copy file1.txt to file2.txt; "-i" prompts for confirmation before overwriting an existing file.
29. df disk free: check available disk space
30. du disk usage: check space used
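Both commands can be tried safely anywhere, for example:

```shell
df -h .   # human-readable free space on the filesystem holding the current directory
du -sh .  # total size of the current directory (-s summary, -h human-readable)
```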
31. find search for files or directories

find ./path -name test.pdf
#find by name
  32. A path is relative if it does not begin with a leading /
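For example (assuming /tmp exists, as it does on virtually all Linux systems; demo_dir is a made-up name):

```shell
cd /tmp            # absolute path: begins with a leading /
mkdir -p demo_dir  # hypothetical directory for the example
cd demo_dir        # relative path: resolved against the current directory
pwd                # prints /tmp/demo_dir
```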
  33. echo print text to the Terminal
  34. truncate -s shrink or extend a file to the specified size
truncate -s 0 file_07.txt
truncate -s 5K file_07.txt
  35. man command show the help/info page for command
  36. find LOCATION OPTIONS [OPTIONAL TASK ON THE RESULT]
find . -type d
find . -type f
find . -type l
find . -empty # . means the current location
find . -size +2k #Find all entities in the current location that are at least 2 KB in size
find . -mtime +3 # Find all entities in the current location that were modified more than 3 days ago
find . -maxdepth 2 -type d # Find all directories in the current location but limit the search to a depth of 2
# sub-folders (depth 1 implies current folder level; depth 2 implies all sub-folders of
# the current folder)
find . -name "folder_01"  #Find an entity named folder_01 in current location
find . -type d -iname "folder_??"
find . -type f -iname "file_??.txt" # Find directories in the current location whose name matches folder_?? and files
# whose name matches file_?? (case-insensitive search).
# ? is a single wild card character - it can represent any character, and can be used
# to expand the scope of a search. ??, then, represents two wild card characters. 
# folder_?? can then mean folder_01, folder_ab, folder_4a, etc. 
# Similarly, file_??.txt can mean file_01.txt, file_07.txt, file_Ab.txt, file_8X.txt, etc.
find . -type f -iname "*.tex" # Find all files in the current location whose name matches *.tex
# * is also a wild card character - it represents any number of characters, and can be
# used to expand the scope of the search.
# *.tex can then mean latex_01.tex, a.tex, hello_world.tex, good-bye_world.tex, etc.
find ${HOME} -mtime +3 -mtime -7 # Find all entities in the ${HOME} folder that were modified a minimum of
# 3 days ago and a maximum of 7 days ago (i.e., modified between 3 and 7 days)
find . -amin -90 # Find all entities in the current location that were accessed within the last 90 minutes
find . -mmin -60 # Find all entities in the current location whose data was modified within the last
# 60 minutes
find . -size +5k -size -5M # Find all entities in the current location that have a size between 5 KB and 5 MB
  37. cp copy entities
cp file_01.txt file_06.txt # Copy file_01.txt as file_06.txt
# If file_06.txt doesn't exist, it will be created
# If it does exist, it will be overwritten
cp file_01.txt folder_02 # Copy file_01.txt under folder_02
cp file_02.txt file_03.txt folder_02 # Copy file_02.txt and file_03.txt under folder_02
cp -r folder_02 folder_06 # Copy folder_02 under folder_06
  38. mv move entities
mv file_01.txt file_07.txt # Move file_01.txt to file_07.txt (i.e., rename)
# If file_07.txt doesn't exist, it will be created
# If it does exist, it will be overwritten
mv file_07.txt folder_02 # Move file_07.txt under folder_02
mv file_02.txt file_03.txt folder_02 # Move file_02.txt and file_03.txt under folder_02
mv folder_02 folder_07 # Move folder_02 under folder_07
  39. chmod change file permissions
chmod u=rwx,g=rx,o=x RealDataFile.dat #u means user, g means group, o means others
chmod 751 RealDataFile.dat
#The user (i.e., the owner of the entity) has read (r; 4), write (w; 2) and execute (x; 1) permissions. 4 + 2 + 1 yields a numerical score of 7 for the user
#Members of the group to which the entity belongs to has read (r; 4) and execute (x; 1) permissions but no write (w; 2) permission. 4 + 0 + 1 yields a numerical score of 5 for the group
#Others (i.e., except the owner and members of the group to which the entity belongs to) have execute (x; 1) permission but no read (r; 4) or write (w; 2) permission. 0 + 0 + 1 yields a numerical score of 1 for the others
chmod u-rwx file_00.txt #Take away read, write and execute permissions from the owner
chmod u=rwx file_00.txt # Add read, write and execute permissions for the owner
chmod g+rw,o-rwx file_00.txt # Add read and write permissions for the group, and take away all permissions from others
chmod g+x,o-x folder_00 # Add execute permission for the group, and take away execute permission from others
chmod u-rwx folder_00 # Take away all permissions from the owner

# Numeric mode sets all three permission groups at once (absolute, not relative);
# the lines below are the numeric counterparts of the symbolic examples above
chmod 044 file_00.txt # Owner: none; group and others: read only
chmod 744 file_00.txt # Owner: read, write, execute; group and others: read only
chmod 760 file_00.txt # Owner: read, write, execute; group: read, write; others: none
chmod 754 folder_00 # Owner: read, write, execute; group: read, execute; others: read
chmod 054 folder_00 # Owner: none; group: read, execute; others: read
  40. rm -r remove folders
rm -r folder_01
rm -r folder_{03,04,05,06,07}
  41. rm remove files
rm file_04.txt
rm file_{05,06}.txt
  42. How to send output to a file instead of displaying it in the Terminal?
hostname > output_redirection.txt #output the results of hostname to output_redirection.txt
echo "Hello, World" >> output_redirection.txt #append "Hello, World" to output_redirection.txt
cat output_redirection.txt

In summary,

  • Using a single > character to re-direct the output
    creates a file with the output of a given command if the file does not already exist
    overwrites a file with the output of a given command if the file does already exist
  • Using a double >> character to re-direct the output
    creates a file with the output of a given command if the file does not already exist
    appends a file with the output of a given command if the file does already exist
  43. cat reads data from a file and prints its contents. cat can also display the contents of a file in read-only mode.
cat <<EOF # EOF marks the end of the here-document; it is just a dummy token and can be replaced by any other suitable string

Hello, World
This is a test for HERE-DOC notation.
EOF
# Create here_doc_notation.txt (notice >)
cat <<EOF > here_doc_notation.txt
$(hostname)

Hello, World
This is a test for HERE-DOC notation (creating a file that does not yet exist).
EOF
# Update here_doc_notation.txt (notice >>)
cat <<EOF >> here_doc_notation.txt

$(date -R)

Hello, World again.
This is another test for HERE-DOC notation (updating a file that already exists).
EOF

It can also combine files one below the other.

# Display the result in Terminal
cat ExptID.txt ExptResult.txt

# Redirect the output to a file and verify the number of lines
cat ExptID.txt ExptResult.txt > ExptIDResult_cat.txt
wc -l ExptIDResult_cat.txt
  44. echo display a line of text/string passed as an argument.
# Record a few strings/words that need to be searched in an external text file
echo "algorithm" > strings2search.txt
echo "information" >> strings2search.txt
echo "scientific" >> strings2search.txt
  45. diff and sdiff compare files
diff SciComp_gedit.tex SciComp_vim.tex
sdiff SciComp_gedit.tex SciComp_vim.tex #the sdiff command compares the two files side by side
# (the Terminal may need to be widened).
  46. comm compares two sorted files and prints a three-column output:

First column with lines unique to the first file
Second column with lines unique to the second file
Third column with lines common to both files
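A minimal sketch with two made-up sorted files (comm requires its inputs to be sorted):

```shell
# Two sorted word lists (hypothetical file names)
printf "apple\nbanana\ncherry\n" > fruits_a.txt
printf "banana\ncherry\ndate\n"  > fruits_b.txt

comm fruits_a.txt fruits_b.txt
# Column 1: apple (only in fruits_a.txt)
# Column 2: date (only in fruits_b.txt)
# Column 3: banana, cherry (in both)

# Suppress columns 1 and 2 to show only the common lines
comm -12 fruits_a.txt fruits_b.txt
```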

  47. cmp compare files byte by byte.
cmp file_05.txt file_06.txt
  48. vim open a file in the vim editor
vim filename.txt
  49. patch apply patches from one version to the next, updating the code automatically
diff -u HelloWorld.c HelloWorld1.c > HelloWorld.patch
#generate a .patch file, which contains the differences between the old and new version
patch --dry-run -i HelloWorld.patch
#First, perform a dry run to observe if there are any errors without really applying the patch.
patch -b -i HelloWorld.patch
#The -b option to patch command will make a backup of the original file (with .orig extension), and then apply the patches to update HelloWorld.c

The patch command also provides an elegant way to reverse the applied changes.

patch -R -i HelloWorld.patch
  50. sed stream editor: manipulate text without opening a file
# Replace the first occurrence of SEARCH with REPLACE in every line 
sed "s/SEARCH/REPLACE/" FILE

# Replace the second occurrence of SEARCH with REPLACE in every line
sed "s/SEARCH/REPLACE/2" FILE

# Replace every occurrence of SEARCH with REPLACE in every line 
# (i.e., global search and replace)
sed "s/SEARCH/REPLACE/g" FILE

# Replace 3rd and remaining occurrences of SEARCH with REPLACE in every line
sed "s/SEARCH/REPLACE/3g" FILE

# Replace every occurrence of SEARCH with REPLACE in every line if the line also 
# contains the string IF_THIS_IS_THERE
sed "/IF_THIS_IS_THERE/s/SEARCH/REPLACE/g" FILE

# Replace every occurrence of SEARCH with REPLACE in every line if the line
# does not contain the string IF_THIS_IS_NOT_THERE
sed "/IF_THIS_IS_NOT_THERE/!s/SEARCH/REPLACE/g" FILE

# Replace every occurrence of SEARCH_THIS with REPLACE_THIS
# or SEARCH_THAT with REPLACE_THAT in every line
sed "s/SEARCH_THIS/REPLACE_THIS/g;s/SEARCH_THAT/REPLACE_THAT/g" FILE

# Delete all empty lines
sed "/^\s*$/d" OSFlavors.txt

# Delete all comment lines (# is used as a comment character)
# A comment line is one that begins with the comment character
sed "/^#/d" OSFlavors.txt

# Delete leading white space (includes regular space and TAB characters) from every line
sed "s/^[ \t]*//" OSFlavors.txt

# Delete trailing white space (includes regular space and TAB characters) from every line
# Note: this is difficult to observe
sed "s/[ \t]*$//" OSFlavors.txt

# Delete both leading and trailing white space from every line
sed "s/^[ \t]*//;s/[ \t]*$//" OSFlavors.txt
#If the search string itself contains a slash (/),
#it needs to be escaped as follows (often known as the tooth-saw effect).
# Replace every occurrence of 'Windows/Mac' with 'Mac/Windows' in every line
sed "s/Windows\/Mac/Mac\/Windows/g" OSFlavors.txt
  51. awk is a pattern scanning and processing language (yes, a language). It also serves as a very powerful command to recognize a specified pattern in a file and extract, and if necessary manipulate/alter, the matching information.
    Within awk, $0 represents the entire line, $1 the first field, $2 the second field, …, $NF the last field; NR represents the line number, and so on.
# Print every field in every line (i.e., mimic the 'cat' command)
awk '{ print $0 }' CartesianCoordinates.xyz
# Print the first field in every line (i.e., the atomic symbol)
awk '{ print $1 }' CartesianCoordinates.xyz
# Print the second field in every line (i.e., the x coordinate)
awk '{ print $2 }' CartesianCoordinates.xyz
# Print the last field in every line (i.e., the z coordinate)
awk '{ print $NF }' CartesianCoordinates.xyz
# Print the line numbers
awk '{ print NR }' CartesianCoordinates.xyz
# Print the number of lines (i.e., mimic the 'wc -l' command)
awk 'END { print NR }' CartesianCoordinates.xyz
# Print every third line
awk 'NR % 3 == 0' CartesianCoordinates.xyz
# Print the first and fourth fields from every line separated by a TAB character
# This can be particularly useful to re-arrange the columns in a given file
awk '{ print $1 "\t" $4 }' CartesianCoordinates.xyz
# Print the second and third fields separated by a TAB character for line number
# greater than 5
awk '{ if (NR > 5) { print $2 "\t" $3 } }' CartesianCoordinates.xyz
# Print the first field for line number greater than 6 but less than 12
awk '{ if ((NR > 6) && (NR < 12)) { print $1 } }' CartesianCoordinates.xyz
# Print the sum of all x coordinates (i.e., the second field)
# One could extend this concept to compute the arithmetic mean of each column
# One could further extend it to compute the center of mass of this system
awk 'BEGIN { sum_x = 0 } { sum_x += $2 } END { print sum_x }' CartesianCoordinates.xyz
# Print all the z coordinates (i.e., the fourth field) and their square root using 
# the formatted print (i.e., printf) statement
awk '{ sqrt_z = sqrt($4) ; printf "%9.6f \t %9.6f\n", $4, sqrt_z }' CartesianCoordinates.xyz 
  52. cut is another useful command for extracting information from a file.
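A small sketch (the record below is made up, shaped like a line of /etc/passwd): -d sets the field delimiter, -f selects fields, and -c selects character positions:

```shell
# A colon-delimited record (hypothetical content and file name)
echo "alice:x:1001:1001:Alice:/home/alice:/bin/bash" > record.txt

cut -d ':' -f 1 record.txt    # first field: alice
cut -d ':' -f 1,7 record.txt  # first and seventh fields: alice:/bin/bash
cut -c 1-5 record.txt         # first five characters: alice
```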
  53. wc -l show the number of lines in a file.
wc -l ExptID.txt
wc -l ExptResult.txt
  54. paste combine files side by side (one next to the other)
# Redirect the output to a file and verify the number of lines
paste ExptID.txt ExptResult.txt > ExptIDResult_paste.txt
wc -l ExptIDResult_paste.txt
  55. split decompose a large file into two or more smaller files, either by size or by number of lines

Splitting by size

# The following command splits LargeDataFile.dat into many smaller files 
# (say, 1 MB each, indicated by the -b 1M option). 
# The -d option will ensure the smaller files will have numeric suffixes, 
# and the -a 4 option indicates the length of numeric suffixes 
# (i.e., from 0000 through 9999; one can adjust the value based on current needs). 
split -d -a 4 -b 1M LargeDataFile.dat LargeDataFile.dat_size_

Splitting by number of lines

# The following command splits LargeDataFile.dat into many smaller files 
# (say, 15 thousand lines per file, indicated by the -l 15000 option).
# The -d -a 4 option means the same as before.
split -d -a 4 -l 15000 LargeDataFile.dat LargeDataFile.dat_lines_
wc -l LargeDataFile.dat_lines_*

Putting the files back together

cat LargeDataFile.dat_lines_* > LargeDataFile.dat_lines
cat LargeDataFile.dat_size_* > LargeDataFile.dat_size
  56. zip is a commonly used compression and file packaging utility. zip leaves behind the uncompressed file after compression.
zip LargeDataFile.dat.zip LargeDataFile.dat
  57. unzip is the corresponding decompression utility. unzip leaves behind the compressed file after decompression.
unzip folder.zip -d folder
#-d means to extract to a particular destination folder
  58. gzip is another commonly used command to compress one or more files using Lempel-Ziv coding (LZ77). By default, gzip does not retain the uncompressed file(s) after compression.
# The -c option with output redirection can be used to preserve the original file. 
# The --best option will take comparatively more time but results in best compression.
gzip -c --best LargeDataFile.dat > LargeDataFile.dat.gz
  59. gunzip is the corresponding decompression utility. gunzip does not retain the compressed file(s) after decompression.
  60. du -sh show the size of files.
du -sh LargeDataFile.dat LargeDataFile.dat.zip 
  61. bzip2 is another commonly used command to compress one or more files using the Burrows-Wheeler block sorting text compression algorithm and Huffman coding. The compressed file can be identified by the extension .bz2.
# Without the -k option, bzip2 does not retain the uncompressed file(s) after compression 
bzip2 -k --best LargeDataFile.dat
  62. bunzip2 uncompress files compressed with bzip2
  63. Reading a compressed file
# read the files compressed using gzip command.
zcat HelloWorld1.c.gz
zless HelloWorld1.c.gz
zmore HelloWorld1.c.gz

# read the files compressed using bzip2 command.
bzcat HelloWorld1.c.bz2
bzless HelloWorld1.c.bz2
bzmore HelloWorld1.c.bz2
  64. tar tape archive utility

Creating the archive

# Create the archive (also known as the 'tar ball')
tar -cvf folder_00.tar folder_00

# Check file type of this archive
file folder_00.tar

# View the contents of the archive without extracting it
tar -tvf folder_00.tar 

# compress a directory into a gzipped tar file
tar -czvf file.tar.gz directory
# extract a gzipped tar file
tar -xzvf file.tar.gz

Extracting the archive

# Move the archive to and extract in /tmp/ folder
mv folder_00.tar /tmp/
cd /tmp/
tar -xvf folder_00.tar

# Delete the archive and the folder
cd /tmp/
rm folder_00.tar
\rm -r folder_00
  65. rc file: an rc file is a configuration file (also known as a config file) containing the parameters and initial settings used to configure a given application. rc stands for "run commands" (historically "runcom").
    e.g.
    ~/.bashrc contains such parameters and settings for the user's shell
    ~/.vimrc for the vim editor
    ~/.gitconfig for Git, etc.
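A hypothetical rc-style fragment, just to show the kind of settings such a file holds; it is written to a scratch file here rather than the real ~/.bashrc:

```shell
# Write a sample rc-style fragment to a scratch file (NOT the real ~/.bashrc)
cat <<'EOF' > sample_bashrc
alias ll='ls -l'   # shorthand for a longer command
export EDITOR=vim  # default editor for programs that consult $EDITOR
EOF

# Load it into the current shell, just as ~/.bashrc is loaded at login
source sample_bashrc
echo "${EDITOR}"   # prints vim
```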
  66. Commands can be chained with pipes, feeding each command's output into the next:
COMMAND_01 | COMMAND_02 | COMMAND_03 | COMMAND_04 | ...
  67. Continue a command beyond one line by using \
echo "AlgalGrowth_001.dat" | \
  awk -F '.' '{ print $1 }' | \
  awk -F '_' '{ print $2 }'
  68. history keeps track of the commands (and when they were executed), but not the output of those commands.
  69. latex and dvips display verbose output en route to producing a PDF from a .tex file.
  70. tee may be used instead of explicitly copying and pasting output into a file; it writes its input both to the screen and to a file.
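For example (the output file name run_log.txt is made up):

```shell
# Show the output in the Terminal AND save it to run_log.txt
echo "Hello, tee" | tee run_log.txt

# -a appends instead of overwriting
echo "Second line" | tee -a run_log.txt
cat run_log.txt
```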
  71. script is another command for capturing the output of commands, similar to tee. The -a option, much like with tee, appends the output to a file (if it already exists) instead of overwriting it.
  72. seq generate a sequence of numbers in the desired range and increment. The -w option zero-pads the sequence appropriately.
seq 0 10
seq 0 1 10
seq 0 2 10
seq 5 -1 0
seq 1.0 0.1 2.00

seq 0 1 10
seq -w 0 1 10
seq 0 1 100
seq -w 0 1 100
seq 1.0 0.1 2.00
seq 1.00 0.10 2.00 
  73. ${RANDOM} generate a single random number at a time
echo "${RANDOM}"
  74. shuf generate a sequence of random numbers in a given range.
# Random numbers between two integers, M and N (e.g., M=500, N=1000)
shuf -i 500-1000 -n 3
  75. A combination of the seq, sort, and head commands can also be used to generate a sequence of random numbers in a given range.
# Random numbers between two integers, M and N (e.g., M=500, N=1000)
seq 500 1 1000 | sort -R | head -n 7
  76. Basic arithmetic
# For integers only
x=1
y=$((x + 100))
echo "x = ${x}; y = ${y}"
printf "x = %d; y = %d\n" "${x}" "${y}"
printf "x = %3d; y = %3d\n" "${x}" "${y}"
printf "x = %03d; y = %03d\n" "${x}" "${y}"

# For non-integer arithmetic, pipe the expression to bc
x=1
echo "${x} + 100.50" | bc
y=$(echo "${x} + 100.50" | bc)
echo "x = ${x}; y = ${y}"
# Observe the difference in results with and without the -l option
# scale = N specifies the number of digits after the decimal point
x=17
y=3
echo "${x} / ${y}" | bc
echo "${x} / ${y}" | bc -l
echo "scale = 5; ${x} / ${y}" | bc
  77. Defining functions
    General form:
# meaningful_name
# A brief description of what this function does
function meaningful_name() {
  COMMANDS
}
export -f meaningful_name

e.g.

function hello_world() {
  echo "Hello, World"
}
  78. Functions with arguments
# hello_world
# Prints "Hello, World" and a string supplied as the first argument
function hello_world() {
  local user_name="$1"
  echo "Hello, World"
  echo "Username is ${user_name}"
}
export -f hello_world

To call a function, use the following form:

meaningful_name ARGUMENT_1 ARGUMENT_2 ... ARGUMENT_N
  79. scp transfer files and folders from the local workstation (source) to a remote machine, e.g. colossus.it (destination).
# Transfer file_${USER}.txt to under /tmp/ in colossus
# Enter Michigan Tech ISO password when prompted
scp file_${USER}.txt ${USER}@colossus.it.mtu.edu:/tmp/
  80. Create a soft link (symbolic link)
ln -s make-scp.inc_ch_modified make.inc
#ln means link
#-s means soft
  81. When searching in vim with /pattern, run :set ic before searching to make the search case-insensitive.
  82. Show a folder's size
du -sh folder_name
  83. Add "#" to the start of every line in a script (vim command)
:%s/^/#/g
# ^ means the start of every line
# g means global
  84. Rename a directory
mv old_name new_name
  85. Useful vim commands
# show line number
:set number

# no auto wrap
:set nowrap

# go to the bottom of the file
Shift + g (i.e., G)

# delete upward 10 lines from the current line
10 + d + up-arrow

# add ! at the start of each line from line 785 to 807
:785,807 s/^/!/
  86. Show the free nodes on the pexue.q queue
qnodes-map pexue.q
  87. Check HPC storage space
df -h /pexue5