Learning Contents Record

2023/01/03

Epoch vs Batch Size vs Iterations

Epochs: One Epoch is when an ENTIRE dataset is passed forward and backward through the neural network only ONCE.

Batch: one of the smaller sets or parts into which the whole dataset is divided.

Batch Size: Total number of training examples present in a single batch.

Iterations: the number of batches needed to complete one epoch.

For example, if we divide a dataset of 2000 examples into batches of 500, it will take 4 iterations to complete 1 epoch.
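As a quick sketch, the same arithmetic in Python (using the numbers from the example above):

import math

dataset_size = 2000   # examples in the whole dataset
batch_size = 500      # examples per batch
iterations_per_epoch = math.ceil(dataset_size / batch_size)
print(iterations_per_epoch)   # 4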

With Statement in Python

Together with a context manager, the with statement is used in exception handling to make the code cleaner and much more readable, and to simplify the management of common resources like file streams.

Here is an example of constructing a context manager and using it with the with statement:


# a simple file writer object

class MessageWriter(object):
    def __init__(self, file_name):
        self.file_name = file_name
    
    def __enter__(self):
        self.file = open(self.file_name, 'w')
        return self.file

    def __exit__(self, *args):
        self.file.close()

# using with statement with MessageWriter

with MessageWriter('my_file.txt') as xfile:
    xfile.write('hello world')

The expression after with looks like an ordinary line of code, but because it is a context manager, its __enter__ method runs when the block is entered, and its __exit__ method runs at the end of the entire code block.

Context Manager

The interface of __enter__() and __exit__() methods, which provides support for the with statement in user-defined objects, is called a context manager.

Two ways to construct a context manager:

  • Class based context manager

  • contextlib module based decorator (see the sketch below)
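The class-based way is shown above. Below is a minimal sketch of the contextlib-based way, rebuilding the same behaviour with a hypothetical message_writer function:

from contextlib import contextmanager

# a generator-based context manager equivalent to MessageWriter above
@contextmanager
def message_writer(file_name):
    file = open(file_name, 'w')
    try:
        yield file        # plays the role of __enter__'s return value
    finally:
        file.close()      # plays the role of __exit__

with message_writer('my_file.txt') as xfile:
    xfile.write('hello world')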

Yield Statement and Generator Function

Yield Statement: The yield statement suspends a function's execution and sends a value back to the caller, but retains enough state to enable the function to resume where it left off. When the function resumes, it continues execution immediately after the last yield run. This allows its code to produce a series of values over time, rather than computing them at once and sending them back like a list.

Generator-Function: A generator-function is defined like a normal function, but whenever it needs to generate a value, it does so with the yield keyword rather than return. If the body of a def contains yield, the function automatically becomes a generator function.
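A minimal sketch (count_up_to is a hypothetical name):

def count_up_to(n):
    i = 1
    while i <= n:
        yield i       # suspend here and send i back to the caller
        i += 1        # execution resumes here on the next next() call

gen = count_up_to(3)
print(next(gen))      # 1
print(next(gen))      # 2
print(next(gen))      # 3; one more next() would raise StopIteration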

Decorators

Decorators: A decorator allows programmers to modify the behaviour of a function or class. Decorators let us wrap another function in order to extend the behaviour of the wrapped function, without permanently modifying it.
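A minimal sketch (log_call is a hypothetical name) that extends a function with logging without modifying it:

import functools

def log_call(func):
    @functools.wraps(func)            # preserve func's name and docstring
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_call
def add(a, b):
    return a + b

add(1, 2)   # prints "calling add" and returns 3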

Iterators and Iterable

An iterator in Python is an object that is used to iterate over iterable objects like lists, tuples, dicts, and sets. An iterator object is obtained with the built-in iter() function and advanced with the built-in next() function.
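A minimal sketch:

nums = [10, 20, 30]      # an iterable
it = iter(nums)          # build an iterator from the iterable
print(next(it))          # 10
print(next(it))          # 20
print(next(it))          # 30; one more next(it) raises StopIteration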

First class functions

First class objects: First class objects in a language are handled uniformly throughout. They may be stored in data structures, passed as arguments, or used in control structures.

First class functions: A programming language (Python, for example) is said to support first-class functions if it treats functions as first-class objects.

Properties of first-class functions (illustrated in the sketch after this list):

  • A function is an instance of the Object type.

  • You can store the function in a variable.

  • You can pass the function as a parameter to another function.

  • You can return the function from a function.

  • You can store them in data structures such as hash tables, lists, …
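A minimal sketch exercising these properties (shout, whisper, greet, and pick are hypothetical names):

def shout(text):
    return text.upper()

def whisper(text):
    return text.lower()

speak = shout                               # stored in a variable

def greet(style):                           # passed as a parameter
    return style("Hello")

def pick(loud):                             # returned from a function
    return shout if loud else whisper

styles = {"loud": shout, "soft": whisper}   # stored in a data structure
print(greet(speak), pick(False)("Hello"), styles["soft"]("Hello"))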

2023/01/04

requires_grad and torch.set_grad_enabled()

requires_grad: In PyTorch, the requires_grad attribute of a tensor specifies whether gradients with respect to the tensor should be computed and stored in the tensor's .grad attribute during the backward pass of a training iteration. This is used to enable or disable gradient computation for specific tensors.

torch.set_grad_enabled(): This context manager enables or disables gradient computation for the enclosed code block, depending on its argument. With gradients disabled, it is typically used when evaluating a model, where there is no need to call backward() to compute gradients and update the corresponding parameters.
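A minimal sketch of both, assuming PyTorch is installed:

import torch

x = torch.ones(3, requires_grad=True)   # track gradients for x
y = (x * 2).sum()
y.backward()                            # backward pass fills x.grad
print(x.grad)                           # tensor([2., 2., 2.])

with torch.set_grad_enabled(False):     # e.g. during evaluation
    z = (x * 2).sum()
print(z.requires_grad)                  # False: no graph was built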

2023/01/19

grep command

grep is a command-line utility for searching plain-text data sets for lines that match a regular expression. Its name comes from the ed command g/re/p (globally search for a regular expression and print matching lines), which has the same effect. grep was originally developed for the Unix operating system, but was later made available for all Unix-like systems and some others such as OS-9.

X-server

Most Ubuntu systems come with a built-in X-server, which is the software component that handles the graphical display of applications. The X-server allows applications to display their graphical output on the screen; it is required to run GUI-based applications on Linux systems.

X-server is included as part of the standard Ubuntu installation, so most Ubuntu systems will have an X-server installed by default. However, some Ubuntu servers or minimal installations may not have the X-server installed, as it may not be required for the specific use case.

You can check if your Ubuntu system has an X-server installed by running the following command in a terminal:

dpkg -l | grep xserver-xorg

2023/01/20

Common Abbreviations

i. e.: Abbreviation of "id est", which means "that is (to say)". /ɪd ɛst/

e. g.: Abbreviation of "exempli gratia", which means "for example". /ɪɡˈzempli ˈɡreiʃiə/

Lookup Table and Colormap

A lookup table (LUT) is a kind of data structure in computer science. It's an array that replaces runtime computation with a simpler array indexing operation. The process is termed "direct addressing".

In image processing, an LUT is simply a table of cross-references linking index numbers to output values, used to determine the colors and intensity values with which a particular image will be displayed; in this context the LUT is often called simply a colormap.
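A minimal sketch of direct addressing with NumPy, using a hypothetical 256-entry RGB colormap:

import numpy as np

indices = np.array([[0, 1],
                    [2, 1]], dtype=np.uint8)   # 2x2 index image

lut = np.zeros((256, 3), dtype=np.uint8)       # the colormap
lut[1] = (255, 0, 0)                           # index 1 -> red
lut[2] = (0, 255, 0)                           # index 2 -> green

rgb = lut[indices]   # direct addressing: shape (2, 2, 3), no computation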

Advantages of using LUTs:

  • Save storage space. The index number can be made to use fewer bits than the output value.

  • Experiment easily with different color labeling schemes.

Disadvantages of using LUTs:

  • Introduce additional complexity into an image format. It is usually necessary for each image to carry around its own colormap, and this LUT must be continually consulted whenever the image is displayed or processed.

  • Color quantization is lossy.

Slice Thickness and Interval

2023/01/31

CT Numbers, Hounsfield Unit, and Gray Scale

CT Windowing

  • Increasing the window level will DECREASE the brightness of the image;

  • Increasing the window width will DECREASE the contrast of the image.
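A minimal sketch of the usual linear window mapping (the exact formula is an assumption), which makes both bullets concrete:

import numpy as np

def apply_window(hu, level, width):
    # map Hounsfield units to [0, 1] display gray levels
    lo, hi = level - width / 2, level + width / 2
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

# Increasing the level shifts [lo, hi] upward, so a fixed HU value maps
# to a smaller gray level (the image gets darker).
# Increasing the width flattens the slope 1 / (hi - lo), so nearby HU
# values map to closer gray levels (the image loses contrast).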

Camera Space Axis

Specify a virtual camera:

  • Specify the location of the camera.

  • Specify a location the camera is looking at. (center location)

  • Specify the general direction of "up" for the camera.

As we can see in the WebGL program, the up vector does not always point directly above the camera. The camera must first satisfy the constraint of looking at the center location; as long as the up vector points out of the top of the camera, it is permitted not to be directly above.

But when it is possible, the camera will roll to follow the rotation of the up vector and keep the up vector directly above the camera.

2023/02/04

Bytes Object

A computer can only store data in the form of bytes. In order to store anything on your computer, you must convert it into a form that a computer can understand and store.

In Python, a string object is a series of characters that make a string. In the same manner, a byte object is a sequence of bits/bytes that represent data. Strings are human-readable while bytes are computer-readable.

The bytes object is one of the core built-in types for manipulating binary data. A bytes object is an immutable sequence of single byte values. Each element in a bytes object is a small integer in the range of 0 to 255.
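A minimal sketch:

s = "héllo"                # str: human-readable characters
b = s.encode("utf-8")      # bytes: computer-readable
print(b)                   # b'h\xc3\xa9llo'
print(list(b))             # [104, 195, 169, 108, 108, 111], all in 0..255
print(b.decode("utf-8"))   # héllo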

Bit, Byte and Binary

bit: Short for binary digit, the smallest unit of information on a machine. A single bit can hold only one of two values: 0 or 1.

byte: Abbreviation for binary term, a unit of storage capable of holding a single character. On almost all modern computers, a byte is equal to 8 bits.

.mrb Files in 3D Slicer

"mrb" is the abbreviation of Medical reality bundles.

A .mrb file is an archive file that contains the mrml scene file and all data for loading into Slicer4. The .mrb file is actually a .zip file but with a different file extension. So, if you rename archive.mrb to archive.zip you can look at (or modify) the contents using normal tools.

The code in Slicer that handles reading .mrb files performs an unzip into a temp directory and then reads the first .mrml file it finds there. Because of this, several of the older .zip files can be opened directly in Slicer by just changing the file extension from .zip to .mrb.

Volume vs. Model

In 3D computer graphics, the terms "volume" and "model" are used to describe different types of data structures used to represent 3D objects.

A volume is a 3D data structure that represents a continuous space and is often used to represent medical imaging data, such as CT or MRI scans. Volumes are defined as a set of voxels, which are 3D pixels with a specific value or intensity. Volumes can be used to visualize the internal structure of an object, as well as to perform operations such as segmentation, registration, and analysis.

A model, on the other hand, is a 3D representation of an object using polygons, surfaces, or other types of geometric shapes. Unlike volumes, models are not defined as a continuous space and instead consist of a set of discrete points and shapes. Models are often used to represent objects that have a more complex shape or structure, such as bones, organs, or vehicles.

In the context of medical imaging, a volume can refer to a CT, MRI, or other 3D imaging data, while a model can be used to represent a segmented structure or 3D surface mesh derived from the imaging data.

In summary, volumes contain raw imaging data while models contain a higher-level representation of the data, often used for visualizing or manipulating the data in a more user-friendly way.

Geometric Representation of RGB

Since colors are usually defined by three components, not only in the RGB model but also in other color models such as CIELAB and Y'UV, a three-dimensional volume is described by treating the component values as ordinary Cartesian coordinates in a Euclidean space. For the RGB model, this is represented by a cube using non-negative values within a 0–1 range, assigning black to the origin at the vertex (0, 0, 0), and with increasing intensity values running along the three axes up to white at the vertex (1, 1, 1), diagonally opposite black.

An RGB triplet (r,g,b) represents the three-dimensional coordinate of the point of the given color within the cube or its faces or along its edges. This approach allows computations of the color similarity of two given RGB colors by simply calculating the distance between them: the shorter the distance, the higher the similarity. Out-of-gamut computations can also be performed this way.
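A minimal sketch of such a similarity computation (rgb_distance is a hypothetical name):

import math

def rgb_distance(c1, c2):
    # Euclidean distance between two points in the RGB unit cube
    return math.dist(c1, c2)

print(rgb_distance((1, 0, 0), (1, 0.1, 0)))   # 0.1   -> very similar
print(rgb_distance((1, 0, 0), (0, 0, 1)))     # ~1.41 -> very different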

Clipping Range

Clipping, in the context of computer graphics, is a method to selectively enable or disable rendering operations within a defined region of interest.

Clip regions are commonly specified to improve render performance. A well-chosen clip allows the renderer to save time and energy by skipping calculations related to pixels that the user cannot see. Pixels that will be drawn are said to be within the clip region. Pixels that will not be drawn are outside the clip region. More informally, pixels that will not be drawn are said to be "clipped."

A rendering algorithm only draws pixels in the intersection between the clip region and the scene model. Lines and surfaces outside the view volume (a.k.a. the frustum) are removed.

A view frustum, with near- and far- clip planes. Only the shaded volume is rendered.

A frustum is the portion of a solid (normally a pyramid or a cone) that lies between two parallel planes cutting this solid.

Actor vs. Prop

vtkProp is an abstract superclass for any objects that can exist in a rendered scene (either 2D or 3D). Instances of vtkProp may respond to various render methods (e.g., RenderOpaqueGeometry()). vtkProp also defines the API for picking, LOD manipulation, and common instance variables that control visibility, picking, and dragging.

vtkActor is used to represent an entity in a rendering scene. It inherits functions related to the actor's position and orientation from vtkProp3D. The actor also has scaling and maintains a reference to the defining geometry (i.e., the mapper), rendering properties, and possibly a texture map. vtkActor combines these instance variables into one 4x4 transformation matrix as follows: [x y z 1] = [x y z 1] Translate(-origin) Scale(scale) Rot(y) Rot(x) Rot(z) Trans(origin) Trans(position)

Gaussian Blur - Standard Deviation vs. Radius 

Standard deviation is a measure of the spread of the distribution. It is often denoted by the symbol sigma (σ) and describes how much the values in the distribution deviate from the mean value. A larger standard deviation indicates a wider spread of values, while a smaller standard deviation indicates a more concentrated cluster of values around the mean. In image processing, Gaussian filters are often used to smooth images and reduce noise, where the standard deviation determines the extent of the smoothing effect.

Radius is the size of the kernel used for smoothing. The kernel is a matrix of values that define the weights of the neighboring pixels that are used in the smoothing process. The radius is typically specified as a number of pixels from the center pixel, and the size of the kernel is defined as twice the radius plus one. For example, a radius of 2 will produce a 5x5 kernel (2*2+1) that is used for smoothing. The larger the radius, the more neighboring pixels are used in the smoothing process, resulting in a smoother image.

In a word, the size of the Gaussian kernel is controlled by the radius, while the specific values in the kernel are controlled by the standard deviation.
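A minimal sketch of a 1D Gaussian kernel that makes this split explicit:

import numpy as np

def gaussian_kernel_1d(radius, sigma):
    x = np.arange(-radius, radius + 1)         # radius -> 2*radius+1 taps
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))   # sigma -> the tap values
    return k / k.sum()                         # normalize to sum to 1

print(gaussian_kernel_1d(2, 1.0))   # 5 values, spread out
print(gaussian_kernel_1d(2, 0.5))   # same 5 taps, concentrated at center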

Marching Cubes Algorithm

The marching cubes algorithm is used to extract a polygonal mesh of an isosurface from a three-dimensional discrete scalar field (the elements of which are sometimes called voxels). Check here and here for implementation details.
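A minimal usage sketch via scikit-image (assuming it is installed), extracting the isosurface of a sphere-like scalar field:

import numpy as np
from skimage import measure

# scalar field: squared distance from the volume center
x, y, z = np.mgrid[-16:16, -16:16, -16:16]
field = (x**2 + y**2 + z**2).astype(float)

# extract the isosurface field == 100 (a sphere of radius 10)
verts, faces, normals, values = measure.marching_cubes(field, level=100)
print(verts.shape, faces.shape)   # mesh vertices and triangles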

Flow Chart

A scalar field is a function associating a single number with every point in a space.

Volumetric meshes: surface + volume

Polygon meshes: surface only (volume is implicit)

low poly triangle mesh

Bash vs. Shell

  • "Bash" is short for "Bourne-Again SHell", which is a type of shell that is the default command-line interface in Ubuntu and many other Linux distributions. Bash is a more feature-rich and modern shell compared to the original Bourne shell (sh).

  • "Shell" is a term used to refer to any command-line interface (CLI) in Unix-like operating systems, including Bash. Other shells commonly used in Unix-like systems include the Korn shell (ksh), the C shell (csh), and the Z shell (zsh). In Ubuntu, "Shell" might also refer to the graphical shell (i.e., the user interface) of the desktop environment being used (e.g., GNOME Shell in Ubuntu with the default GNOME desktop environment).

"-f", "-n" and "-d" in Shell Script

The "-f" flag is used to check if a file exists. For example, the following command checks if the file "myfile.txt" exists and prints "File exists" if it does.

if [ -f "myfile.txt" ]; then
    echo 'File exists'
fi

The "-n" flag is used to check if a string is not empty. For example, the following command checks if the variable "str" is not empty and prints "String is not empty" if it is not.

if [ -n "$str" ]; then
    echo 'String is not empty'
fi

The "-d" flag is used to check if a directory exists or not. For example, the following command checks if the directory exixts and prints the inspection results.

if [ -d "/path/to/directory" ]; then
    echo "Directory exists."
else
    echo "Directory does not exist."
fi

Bash Script Syntax

eval: evaluate and execute a command passed to it as a string. The string is first expanded, and then the resulting command is executed by the shell. For example, if a string variable command_str contains the string ls -l /, running eval "$command_str" will execute the command ls -l /.

\: a backslash will suppress alias expansion, i.e., it executes the original command and makes sure that the alias version does not run.

TTY (Teletypewriter)

TTY is the shorthand for Teletypewriter. Nowadays, TTYs are text-only terminals commonly used as a way to get access to the computer to fix things.

CTRL + ALT + F1 – Lockscreen
CTRL + ALT + F2 – Desktop Environment
CTRL + ALT + F3 – TTY3
CTRL + ALT + F4 – TTY4
CTRL + ALT + F5 – TTY5
CTRL + ALT + F6 – TTY6
CTRL + D – logout

Reference

$() vs. ${}

The expression $(command) is a modern synonym for `command`, which stands for command substitution; it means: run command and put its output here.

echo "Today is $(date). A fine day."

will run the date command and include its output in the argument to echo.

By contrast, ${variable} is just a disambiguation mechanism, so you can say ${var}text when you mean the contents of the variable var, followed by text (as opposed to $vartext which means the contents of the variable vartext).

Interactive, Non-interactive, Login, Non-login Shells

$ echo $-
himBHs
$ shopt login_shell
login_shell    	on

-> interactive login shell.

Running bash from it -> interactive non-login shell

Another way to check the status of login:

echo $0
bash # Non-login
echo $0
-bash # Login

Order of Loading Start-up Files of Bash

As said in the bash man page:

When invoked as an interactive login shell:

Bash first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable.

When invoked as an interactive non-login shell:

Bash reads and executes commands from ~/.bashrc, if that file exists.
For some specific systems like Ubuntu, it will read /etc/bash.bashrc before ~/.bashrc, as explained here.

As described next:

When you first log in to a Unix system from a terminal, the system normally starts a login shell. A login shell is typically the top-level shell in the “tree” of processes that starts with the init process. Many characteristics of processes are passed from parent to child process down this “tree” — especially environment variables, such as the search path. The changes you make in a login shell will affect all the other processes that the top-level shell starts — including any subshells.
So, a login shell is where you do general setup that's done only the first time you log in — initialize your terminal, set environment variables, and so on. […]

More specifically, the start-up process of our commonly used terminals in Ubuntu can be divided into two parts:

The first part is the login of the system, which is actually the process of starting a login shell. During this part, bash will read and execute the commands in /etc/profile and ~/.profile. That's when the system-wide environment variables are loaded.

The second part is when we open a new terminal. The terminal opened this way is a kind of non-login shell, which will read and execute the commands in /etc/bash.bashrc and ~/.bashrc. But also, as it is a child process of the login shell, it inherits all the characteristics obtained during the first part.

In a word, the terminal we often use contains environment variables configured in /etc/profile, ~/.profile, /etc/bash.bashrc and ~/.bashrc. If we want changes in these files to take effect, we must log out and log back in (or reboot) for /etc/profile and ~/.profile, but only need to open a new terminal for /etc/bash.bashrc and ~/.bashrc.

PATH vs. PYTHONPATH vs. sys.path

PATH is an environment variable in Linux and other Unix-like operating systems that tells the shell which directories to search for executable files (i.e., ready-to-run programs) in response to commands issued by a user.

PYTHONPATH is also an environment variable that you set before running the Python interpreter, containing directories that should be searched for modules and packages when using import.

sys.path is a list created automatically when a Python interpreter starts; it contains all of the directories the interpreter will search for modules and packages when importing. If PYTHONPATH is set, Python includes its directories in sys.path for searching.
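A minimal sketch (the appended directory is hypothetical):

import sys

print(sys.path)   # script's directory, PYTHONPATH entries, site-packages, ...

# directories can also be added at runtime, affecting only this process:
sys.path.append("/opt/my_project")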

Solve Ubuntu Crash

  1. Press CTRL + ALT + F3~F6 to enter tty mode;

  2. enter the username and password to log in;

  3. enter top to show system processes dynamically;

  4. remember the PID of the process that may cause the crash;

  5. enter kill PID to shut it down;

  6. Press CTRL + ALT + F2 to return to the desktop environment.

C++ Syntax

#include directive:

Include header files. Header files have extensions like .h, .hpp, or .hxx, or have no extension at all like in the C++ standard library and other libraries' header files (like Qt). The extension doesn't matter for the C++ preprocessor, which will literally replace the line containing the #include directive with the entire content of the included file.

The Build Pipeline: Preprocess, Compile, and Link

Preprocess:

The first step that the compiler will do on a source file is to run the preprocessor on it. Only source files are passed to the compiler (to be preprocessed and compiled). Header files aren't passed to the compiler. Instead, they are included from source files. During the preprocessing phase, each header file can be opened multiple times, while source files, on the other hand, are opened only once by the compiler (and preprocessor), when they are passed to it. For each C++ source file, the preprocessor will build a translation unit by inserting content into it when it finds an #include directive, while stripping code out of the source file and of the headers when it finds conditional compilation blocks whose directives evaluate to false. It'll also do some other tasks like macro replacements.

(cpp is short for "C preprocessor", which also handles C++ sources)

cpp hello.cpp > hello.i
Compile:

Each .cpp file is compiled as its own independent translation unit, which converts the preprocessed code to binary code and outputs the so-called object file. Since these units are independent of each other, they have no idea what is going on in other object files, and they usually lack some implementation details of classes, functions, and so on. That's why these object files are not executable.

Actually, this process can be divided into two parts in detail, which are compilation and assembly.

During the compilation, the g++ compiler compiles the preprocessed source code to assembly language, whose output is hello.s.

g++ -S hello.i

During the assembly, the assembler as converts the assembly to machine code, producing the object file mentioned above.

as -o hello.o hello.s
Link:

Use linker to link together a bunch of object files (.o files) into a binary executable. This includes both the object files that the compiler created from your source code files as well as object files that have been pre-compiled for you and collected into library files. These files have names which end in .a or .so, and you normally don't need to know about them, as the linker knows where most of them are located and will link them in automatically as needed.

The linker ld links the object code with the library code to produce an executable:

ld -o hello hello.o ...libraries...

The libraries argument is quite long and contains both .o and .so files. If you are interested in finding out, you can run the command g++ -Q -v -o hello hello.cpp and take a look at the last line, where g++ invokes collect2.

Generally, using g++ to build a project needs only two steps.

  1. use g++ -c to convert source files to object files;

  2. use g++ to link the object files.

One simple Makefile for demonstration:

hello: main.o age.o
    g++ main.o age.o -o hello
    
main.o: main.cpp
    g++ -c main.cpp
    
age.o: age.cpp
    g++ -c age.cpp
    
clean:
    rm *.o hello
A great article and video demonstrate this.
Code and process reference.

Header Files vs. Static Libraries vs. Shared Libraries

Dependencies of a project can be divided into two parts (folders): include and library.

The include folder contains header files.

The library (lib) folder contains associated object files, static libraries (.a), or shared libraries (.so).

When compiling a project, we should specify the path of the headers to the compiler, and also specify the path of the libraries to the linker.

Static Linking

Dynamic Linking: can be divided into "half-dynamic linking", which produces .a and .so files simultaneously and uses the .a file to locate contents in the .so, and "fully-dynamic linking", which has only a .so file; the executable has no idea about its existence, so it will still run successfully even if the .so is not loaded.

Static Library: a collection of object files that have been archived together into a single file. When input into the linker, they are first unpacked into individual object files that are then linked together with the object files generated from the source code to produce the final executable or shared library. When a program is linked with a static library, the object code from the library is copied into the final executable or shared library. This means that the resulting program contains a complete copy of the code from the static library, even if not all of the functions in the library are used.

Shared Library: also a collection of object files, but it is designed to be loaded at run-time by the operating system. When a program is linked with a shared library, it does not include the object code from the library in the final executable. Instead, the program contains references to the functions in the shared library, and the operating system loads the library into memory when the program is run.

Utility of Makefile and CMakeLists.txt

Makefile entries generally have the following format:

target: dependencies
<tab>    command

A minimal CMakeLists.txt looks like:

cmake_minimum_required(VERSION 3.10.0)

# project name
project (hello_cmake)

add_executable(hello_age main.cpp age.cpp)

CMake is cross-platform, while Make is primarily used on Unix-based platforms.

CMake comes with a GUI that can be used to configure projects, while Make is purely a command-line tool.

Reference 1

Reference 2

Different Files in Project

Header file: resides in the "include" directory and contains declarations of functions, classes, variables, and so on.

Implementation file: resides in the "source" directory and contains the specific implementation of functions, classes, and so on. It usually has the same name as the corresponding header file, but with the suffix .cpp.

Main file: resides in the "source" directory and contains the main function of the project (including references to functions, classes, and variables declared in header files and implemented in implementation files). It is usually named main.cpp.

Use CMake in Visual Studio Code

Create a CMake project:

  1. create an empty folder

  2. CMake: Quick Start

  3. CMake: Select a Kit

  4. CMake: Select a Variant

  5. CMake: Configure (=> generates build files)

  6. CMake: Build

Kit: A kit represents a toolchain, which is the compiler, linker, and other tools used to build your project.

Use VTK With CMake

This file is used by CMake when finding VTK:

/opt/VTK-9.1.0/VTK-build/vtk-config.cmake

which includes "/opt/VTK-9.1.0/VTK-build/lib/cmake/vtk-9.1/vtk-config.cmake"

which is also located in "/usr/local/lib/cmake/vtk-9.1/vtk-config.cmake"

Pointers to Arrays

Source code:


#include <iostream>
 
using namespace std;
 
int main ()
{
   const int my_int[] = {1, 2, 3};
   cout << my_int << endl;
   const char my_char[] = "string";
   cout << my_char << endl;
   return 0;
}

Compile and output:

$ ./test
0x7ffdad9771f4
string

Explanation:

In C++, an array name is implicitly converted to a pointer to the first element of the array when it is passed to a function or used in an expression.

In the code above, my_int is an array of ints, and when you pass it to cout, it is implicitly converted to a pointer to the first element of the array. Therefore, the first output is the address of the first element of the array.

On the other hand, my_char is an array of chars, but it is also a string (a sequence of characters terminated by a null character \0). When you pass it to cout, it is treated as a C-style string and the whole string is printed, not just the address of the first element.

Volume Rendering

Volume Rendering: the input itself is already a volume. What volume rendering does is use a ray-casting algorithm, where each voxel is treated as a sample along a ray, to retrieve the detailed information inside the volume; the accumulated samples are used to determine the final color and opacity of each pixel. In short, it converts a 3D volume into a 2D image (the computer screen) for display. It works like a mapper.
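A minimal sketch of the front-to-back compositing step of ray casting, assuming each sample along a ray already has a color and an opacity from a transfer function:

def composite(samples):
    # samples: (color, opacity) pairs ordered front to back along one ray
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # add what is still visible
        alpha += (1.0 - alpha) * a       # accumulate opacity
        if alpha >= 0.99:                # early ray termination
            break
    return color, alpha

print(composite([(1.0, 0.3), (0.5, 0.5), (0.2, 0.9)]))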

Hue, Saturation, Brightness

HSB: used when picking a color.

  • Hue: purely the color itself; the base, raw, actual color.

  • Saturation: how true the color is; how far the color is from gray.

  • Brightness: the brighter, the closer to white; how far the color is from black.

Color Wheel

HSL: used when adjusting an image.

Luminance:

Wrap and Pad

Wrap: use values from the opposite edge of the image, as if the image were a seamless "wrap-around" texture.

Pad: add pixels or voxels at the edge of an image or a volume.

Points vs Cells

Each pixel in the image has a scalar value (e.g. grayscale intensity or RGB values). In vtkImageData, these scalar values are stored as Point data. The Point data represents the scalar value at each point in the 2D space defined by the pixel coordinates.

However, when we want to visualize the image as a polygonal mesh or a volume, we need to convert the Point data to Cell data. In the case of a 2D image, this involves converting each pixel into a square cell with the pixel's scalar value as the cell data. The scalar value of the cell is determined by various interpolation methods, such as nearest-neighbor or linear interpolation.

So, in this example, the Point data represents the scalar value at each pixel location, while the Cell data represents the scalar value over a region of space (i.e. the square cell).

vtkPointDataToCellData is a filter that transforms point data (i.e., data specified per point) into cell data (i.e., data specified per cell). By default, the method of transformation is based on averaging the data values of all the points defining a particular cell. Optionally (by enabling CategoricalData), histogramming can be used to assign the cell data. For large datasets with several cell data arrays, the filter optionally supports selective processing to speed up processing. Optionally, the input point data can be passed through to the output as well.
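A minimal usage sketch in Python (assuming the vtk package is installed):

import vtk

# a 3x3 image: 9 points -> 4 square cells
image = vtk.vtkImageData()
image.SetDimensions(3, 3, 1)

scalars = vtk.vtkFloatArray()
for v in range(9):
    scalars.InsertNextValue(float(v))    # one scalar per point
image.GetPointData().SetScalars(scalars)

p2c = vtk.vtkPointDataToCellData()       # averages point data per cell
p2c.SetInputData(image)
p2c.Update()
print(p2c.GetOutput().GetCellData().GetScalars().GetValue(0))   # 2.0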

Bezier Curve vs. B-splines

  • Bezier curves are smooth curves defined using control points.

A degree n Bezier curve is defined by n + 1 control points Pi (see the de Casteljau sketch after this list).

  • B-splines (short for Basis splines) use several Bezier curves joined end on end.

A k degree B-spline curve defined by n + 1 control points will consist of n - k + 1 Bezier curves.
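A minimal sketch of evaluating a Bezier curve with de Casteljau's algorithm (repeated linear interpolation of the control points):

def bezier_point(control_points, t):
    # evaluate the Bezier curve defined by n + 1 control points at t
    pts = list(control_points)
    while len(pts) > 1:      # each pass reduces the point count by one
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# a degree 2 curve from 3 control points:
print(bezier_point([(0, 0), (1, 2), (2, 0)], 0.5))   # (1.0, 1.0)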

Create Remote Linux Desktop

  1. Initiate

# login
$ ssh root@49.7.155.156
# add mirror sources
$ vim /etc/apt/sources.list
# update
$ apt update
$ apt dist-upgrade

Tsinghua mirror sources:


deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-proposed main restricted universe multiverse

2. Configure the deep learning environment

Install nvidia driver
# install nvidia driver
$ apt install nvidia-driver-525-server 
# verify
$ nvidia-smi
Install CUDA Toolkit
# install CUDA Toolkit
$ wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run
$ sudo sh cuda_11.8.0_520.61.05_linux.run
# configure environment variables
$ vim ~/.bashrc
$ source ~/.bashrc
# verify
$ nvcc -V

Install CUDA Toolkit: Go to this page and select runfile (local).

Configure environment variables: refer to this page.

*To uninstall the CUDA Toolkit, run cuda-uninstaller in /usr/local/cuda-11.8/bin

export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
Install cuDNN

Go to this page and select "Local Installer for Linux x86_64 (Tar)".

$ cd ~/Downloads/
$ scp cudnn-linux-x86_64-8.8.1.3_cuda11-archive.tar.xz root@49.7.155.156:~/

Switch to the remote workstation:

# extract
$ tar -xvf cudnn-linux-x86_64-8.8.1.3_cuda11-archive.tar.xz 
# copy
$ cp cudnn-linux-x86_64-8.8.1.3_cuda11-archive/include/* /usr/local/cuda/include/
$ cp cudnn-linux-x86_64-8.8.1.3_cuda11-archive/lib/* /usr/local/cuda/lib64/
Configure Python environment
# install anaconda
$ wget https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda3-5.3.1-Linux-x86_64.sh
$ sh Anaconda3-5.3.1-Linux-x86_64.sh
$ source ~/.bashrc
# update
$ conda update -n base -c defaults conda
# create an environment
$ conda create -n kaggle python=3.10
# prevent conda's base environment from being activated on startup
$ conda config --set auto_activate_base false
Install PyTorch

Refer to this page.

$ pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

Verify.

(kaggle) root@lyl:~# python
Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'2.0.0+cu118'
>>> torch.cuda.is_available()
True

3. Create Multiple Users

# create user
$ adduser <username>
# allow other users to run sudo
$ adduser <username> sudo

4. Notice

  • The newer the driver version, the better.

  • First check the PyTorch requirements, then determine the specific version of CUDA and cuDNN.

  • PyTorch 2.0 no longer supports Python 3.7, as seen on this page.

  • To make environment configurations work for all users, move them from ~/.bashrc into /etc/bash.bashrc.

Image Registration

Usage of dpkg

To see which files are in an installed package.

dpkg -L <package>

To find out which package a specific file came from.

dpkg -S <file>

To list the content of a .deb-file.

dpkg -c <file.deb>

In order to evaluate what pre/post-install actions are taken, these files need to be extracted and viewed manually.

dpkg -e <file.deb> [folder] 