Java theory and practice

1. Multi-Thread

Locks offer two primary features: mutual exclusion and visibility. Mutual exclusion means only one thread at a time may hold a given lock, so only one thread at a time will be using the shared data. Visibility ensures that changes made to shared data prior to releasing a lock are made visible to another thread that subsequently acquires that lock.
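
A minimal sketch of both guarantees with an intrinsic lock (class and field names are illustrative):

    public class SynchronizedValue {
        private int value;                     // guarded by "this"

        public synchronized void set(int v) {  // mutual exclusion: one writer at a time
            value = v;                         // the write happens-before the lock release
        }

        public synchronized int get() {        // acquiring the same lock makes the
            return value;                      // latest write visible to this thread
        }
    }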

 

You must ensure that your threads spend most of their time actually doing work, rather than waiting for more work to do, or waiting for locks on shared data structures.

 

An algorithm is said to be wait-free if every thread will continue to make progress in the face of arbitrary delay (or even failure) of other threads. By contrast, a lock-free algorithm requires only that some thread always make progress. (Another way of defining wait-free is that each thread is guaranteed to correctly compute its operations in a bounded number of its own steps, regardless of the actions, timing, interleaving, or speed of the other threads. This bound may be a function of the number of threads in the system; for example, if ten threads each execute the CasCounter.increment() operation once, in the worst case each thread will have to retry at most nine times before the increment is complete.)
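
A sketch of what a counter like the CasCounter mentioned above might look like; this version borrows java.util.concurrent.atomic.AtomicInteger for the CAS primitive rather than reproducing the article's listing:

    import java.util.concurrent.atomic.AtomicInteger;

    // Lock-free counter: each increment() retries its CAS until it wins.
    public class CasCounter {
        private final AtomicInteger value = new AtomicInteger();

        public int getValue() {
            return value.get();
        }

        public int increment() {
            int v;
            do {
                v = value.get();                       // read the current value
            } while (!value.compareAndSet(v, v + 1));  // retry if another thread changed it
            return v + 1;
        }
    }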

 

A common technique for tuning the scalability of a concurrent application that is experiencing contention is to reduce the granularity of the lock objects used, in the hopes that more lock acquisitions will go from contended to un-contended. The conversion from locking to atomic variables achieves the same end -- by switching to a finer-grained coordination mechanism, fewer operations become contended, improving throughput. 

 

1.0 Thread


1.1 synchronized / wait() / notify()

 

1.2 volatile

Due to the semantics of some programming languages, the code generated by the compiler is allowed to update the shared variable to point to a partially constructed object before the constructing thread has finished performing the initialization.

 

Before 1.5, volatile only guaranteed that the write to the reference itself was flushed; since 1.5, all changes made before the volatile write are also guaranteed to be visible to another thread once it reads the flushed reference.

 

  1. (In all versions of Java) There is a global ordering on the reads and writes to a volatile variable. This implies that every thread accessing a volatile field will read its current value before continuing, instead of (potentially) using a cached value. (However, there is no guarantee about the relative ordering of volatile reads and writes with regular reads and writes, meaning that it's generally not a useful threading construct.)
  2. (In Java 5 or later) Volatile reads and writes establish a happens-before relationship, much like acquiring and releasing a mutex.
  3. Also, on earlier JDKs (pre-1.5) / memory models, there was an issue in that if you had a volatile reference to an object, accessing it did not necessarily flush non-synchronized writes on the object itself; i.e., you could see the correct reference/object but not necessarily the updated fields of that object (non-synchronized writes were not guaranteed to be visible relative to the volatile access), whereas the synchronized version would always work.
  4. In JDK 5, volatile was strengthened: previously it was only (supposedly) guaranteed to apply to access to the volatile field itself, but now it effectively creates a memory flush much like synchronization does. This is discussed in the Java memory model FAQ.

 

 

Listing 2. Using a volatile variable as a status flag
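
The listing itself is not reproduced here; a sketch of the status-flag idiom it describes (names are illustrative) looks like this:

    // A volatile "shutdown requested" flag: the writer sets it once, and reader
    // threads are guaranteed to see the new value on their next check.
    public class Worker implements Runnable {
        private volatile boolean shutdownRequested;

        public void shutdown() {
            shutdownRequested = true;
        }

        public void run() {
            while (!shutdownRequested) {
                // do one unit of work
            }
        }
    }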

 

Listing 3. Using a volatile variable for safe one-time publication 
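
A sketch of the one-time publication idiom that listing describes (the Config class here is made up for illustration); under the Java 5+ memory model, a thread that reads a non-null reference from the volatile field is guaranteed to see the fully initialized object:

    public class ConfigHolder {
        private volatile Config config;          // null until initialized once

        public void initInBackground() {
            config = loadConfig();               // volatile write publishes the fully built object
        }

        public Config getConfig() {
            return config;                       // readers see either null or a complete Config
        }

        private Config loadConfig() {
            return new Config();
        }

        static class Config { /* effectively immutable configuration data */ }
    }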

 

Listing 6. Combining volatile and synchronized to form a "cheap read-write lock"
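
A sketch of the "cheap read-write lock" pattern the listing title refers to (names are illustrative): reads go through the volatile field with no locking, while the compound read-modify-write is synchronized so updates are not lost:

    public class CheapReadWriteCounter {
        private volatile int value;

        public int get() {                       // unlocked read still sees the latest write
            return value;
        }

        public synchronized int increment() {    // lock only the compound update
            return ++value;                      // safe: writers are serialized by the lock
        }
    }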

 

 

 

http://en.wikipedia.org/wiki/Volatile_variable

http://en.wikipedia.org/wiki/Double-checked_locking

http://www.ibm.com/developerworks/java/library/j-jtp06197.html

 

1.3 lock

 

 

1.4 Atomic

http://www.ibm.com/developerworks/java/library/j-jtp11234/

The first processors that supported concurrency provided atomic test-and-set operations, which generally operated on a single bit. The most common approach taken by current processors, including Intel and SPARC processors, is to implement a primitive called compare-and-swap, or CAS.

 

 

The natural way to use CAS for synchronization is to read a value A from an address V, perform a multistep computation to derive a new value B, and then use CAS to change the value of V from A to B. The CAS succeeds if the value at V has not been changed in the meantime. Instructions like CAS allow an algorithm to execute a read-modify-write sequence without fear of another thread modifying the variable in the meantime, because if another thread did modify the variable, the CAS would detect it (and fail) and the algorithm could retry the operation. Listing 3 illustrates the behavior (but not performance characteristics) of the CAS operation, but the value of CAS is that it is implemented in hardware and is extremely lightweight (on most processors):
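
The listing is not reproduced here; the following sketch (class name of my choosing) simulates the semantics described above, though of course not the hardware performance:

    // Simulates the semantics (not the performance) of compare-and-swap:
    // the update is applied only if the current value still equals the expected value.
    public class SimulatedCAS {
        private int value;

        public synchronized int get() {
            return value;
        }

        public synchronized int compareAndSwap(int expectedValue, int newValue) {
            int oldValue = value;
            if (oldValue == expectedValue) {
                value = newValue;                // nobody changed it in the meantime
            }
            return oldValue;                     // caller detects failure by comparing this to expectedValue
        }
    }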

 

 

 

 

https://www.ibm.com/developerworks/java/library/j-jtp11234/

 

1.5 CountDownLatch Semaphore CyclicBarrier

CountDownLatch is a synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes. E.g. in a master thread, you spawn several worker threads and wait until those worker threads finish.
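
A minimal sketch of that master/worker pattern (the worker count and printed messages are illustrative):

    import java.util.concurrent.CountDownLatch;

    // The master thread spawns several workers and blocks until all have finished.
    public class CountDownLatchDemo {
        public static void main(String[] args) throws InterruptedException {
            final int workers = 3;
            final CountDownLatch done = new CountDownLatch(workers);

            for (int i = 0; i < workers; i++) {
                final int id = i;
                new Thread(new Runnable() {
                    public void run() {
                        System.out.println("worker " + id + " finished");
                        done.countDown();        // each worker counts down once
                    }
                }).start();
            }

            done.await();                        // blocks until the count reaches zero
            System.out.println("all workers done");
        }
    }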

 

Semaphore: A counting semaphore. Conceptually, a semaphore maintains a set of permits. Each acquire() blocks if necessary until a permit is available, and then takes it. Each release() adds a permit, potentially releasing a blocking acquirer. However, no actual permit objects are used; the Semaphore just keeps a count of the number available and acts accordingly.

 

Releases a permit, increasing the number of available permits by one. If any threads are trying to acquire a permit, then one is selected and given the permit that was just released. That thread is (re)enabled for thread scheduling purposes.

There is no requirement that a thread that releases a permit must have acquired that permit by calling acquire(). Correct usage of a semaphore is established by programming convention in the application.

In computer science, a semaphore is a protected variable or abstract data type that constitutes a classic method of controlling access by several processes to a common resource in a parallel programming environment. A semaphore generally takes one of two forms: binary and counting.

A semaphore is typically used to limit access to a pool of resources, as in the sketch below.
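
For example, a sketch in which at most three threads may use a limited resource at a time (the class name and permit count are illustrative):

    import java.util.concurrent.Semaphore;

    // A Semaphore with 3 permits limits concurrent access to a pool of 3 resources.
    public class ConnectionLimiter {
        private final Semaphore permits = new Semaphore(3);

        public void useConnection() throws InterruptedException {
            permits.acquire();                   // blocks while all 3 permits are taken
            try {
                // work with one of the limited resources
            } finally {
                permits.release();               // always return the permit
            }
        }
    }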

 

 

CyclicBarrier  

A synchronization aid that allows a set of threads to all wait for each other to reach a common barrier point. CyclicBarriers are useful in programs involving a fixed sized party of threads that must occasionally wait for each other. The barrier is called cyclic because it can be re-used after the waiting threads are released.
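
A sketch with three parties and a barrier action (party count and messages are illustrative):

    import java.util.concurrent.BrokenBarrierException;
    import java.util.concurrent.CyclicBarrier;

    // Each worker computes its part and then waits at the barrier; the barrier
    // action runs once per round, after which the barrier can be reused.
    public class CyclicBarrierDemo {
        public static void main(String[] args) {
            final CyclicBarrier barrier = new CyclicBarrier(3, new Runnable() {
                public void run() {
                    System.out.println("all parties reached the barrier");
                }
            });

            for (int i = 0; i < 3; i++) {
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            // compute this thread's partial result
                            barrier.await();             // wait for the other parties
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        } catch (BrokenBarrierException e) {
                            // another party broke the barrier
                        }
                    }
                }).start();
            }
        }
    }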

 

2. Collections

Interface

   Collection, List, Set, Queue, Deque | Map

 

Collection: The root interface in the collection hierarchy. A collection represents a group of objects, known as its elements

List: An ordered collection (also known as a sequence). 

Set: A collection that contains no duplicate elements. More formally, sets contain no pair of elements e1 and e2 such that e1.equals(e2), and at most one null element.

 

 

Queue: A collection designed for holding elements prior to processing. Queues typically, but do not necessarily, order elements in a FIFO (first-in-first-out) manner.

Deque: A linear collection that supports element insertion and removal at both ends. The name deque is short for "double ended queue" and is usually pronounced "deck".

 

Map: An object that maps keys to values. A map cannot contain duplicate keys; each key can map to at most one value.

 

 

 

Class

HashMap: Hash table based implementation of the Map interface. This implementation provides all of the optional map operations, and permits null values and the null key. (The HashMap class is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls.) 

 

Hashtable: This class implements a hashtable, which maps keys to values. Any non-null object can be used as a key or as a value. Unlike the new collection implementations, Hashtable is synchronized.

 

WeakHashMap: A hashtable-based Map implementation with weak keys. An entry in a WeakHashMap will automatically be removed when its key is no longer in ordinary use. The value objects in a WeakHashMap are held by ordinary strong references. Thus care should be taken to ensure that value objects do not strongly refer to their own keys, either directly or indirectly. One way to deal with this is to wrap values themselves within WeakReferences before inserting, as in: m.put(key, new WeakReference(value)), and then unwrapping upon each get.

IdentityHashMap: This class implements the Map interface with a hash table, using reference-equality in place of object-equality when comparing keys (and values).

 

LinkedHashMap: Hash table and linked list implementation of the Map interface, with predictable iteration order. This implementation differs from HashMap in that it maintains a doubly-linked list running through all of its entries. This linked list defines the iteration ordering, which is normally the order in which keys were inserted into the map (insertion-order). Note that insertion order is not affected if a key is re-inserted into the map. (A key k is reinserted into a map m if m.put(k, v) is invoked when m.containsKey(k) would return true immediately prior to the invocation.)

TreeMap: A Red-Black tree based NavigableMap implementation.

 

 

LinkedList: Linked list implementation of the List interface. Implements all optional list operations, and permits all elements (including null). In addition to implementing the List interface, the LinkedList class provides uniformly named methods to get, remove and insert an element at the beginning and end of the list. These operations allow linked lists to be used as a stack, queue, or double-ended queue.

ArrayList: Resizable-array implementation of the List interface. Implements all optional list operations, and permits all elements, including null. In addition to implementing the List interface, this class provides methods to manipulate the size of the array that is used internally to store the list. (This class is roughly equivalent to Vector, except that it is unsynchronized.)

Vector: The Vector class implements a growable array of objects. Like an array, it contains components that can be accessed using an integer index. Unlike the new collection implementations, Vector is synchronized.

Stack: The Stack class represents a last-in-first-out (LIFO) stack of objects. It extends class Vector with five operations that allow a vector to be treated as a stack. 

PriorityQueue: An unbounded priority queue based on a priority heap. The elements of the priority queue are ordered according to their natural ordering, or by a Comparator provided at queue construction time, depending on which constructor is used. A priority queue does not permit null elements. A priority queue relying on natural ordering also does not permit insertion of non-comparable objects (doing so may result in ClassCastException).

HashSet: This class implements the Set interface, backed by a hash table (actually a HashMap instance). It makes no guarantees as to the iteration order of the set; in particular, it does not guarantee that the order will remain constant over time. This class permits the null element.

 

LinkedHashSet: Hash table and linked list implementation of the Set interface, with predictable iteration order

TreeSet: A NavigableSet implementation based on a TreeMap

 

 

Non-null: 

Hashtable (non-null key && non-null value), PriorityQueue

TreeMap, TreeSet: inserting null may succeed only for the first element (no comparison is performed yet), then fails for later elements

 

Offer and Add

offer(E) inserts the specified element into this queue if it is possible to do so immediately without violating capacity restrictions. When using a capacity-restricted queue, this method is generally preferable to add(E), which can fail to insert an element only by throwing an exception.
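
A small sketch contrasting the two on a bounded queue (using an ArrayBlockingQueue of capacity 1 purely for illustration):

    import java.util.concurrent.ArrayBlockingQueue;

    // On a capacity-restricted queue, add() throws when the queue is full,
    // while offer() simply returns false.
    public class OfferVsAdd {
        public static void main(String[] args) {
            ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<String>(1);

            System.out.println(queue.offer("first"));   // true: inserted
            System.out.println(queue.offer("second"));  // false: full, no exception

            try {
                queue.add("third");                      // throws IllegalStateException
            } catch (IllegalStateException e) {
                System.out.println("add() failed: " + e.getMessage());
            }
        }
    }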


http://www.falkhausen.org/en/diagram/html/java.util.Collection.html

http://java.sun.com/docs/books/tutorial/collections/index.html

 

3. Generic

http://java.sun.com/docs/books/tutorial/java/generics/index.html

 

4. Reflection

4.1 Concept and Basic usage 

In computer science, reflection is the process by which a computer program can observe and modify its own structure and behavior. The programming paradigm driven by reflection is called reflective programming. It is a particular kind of metaprogramming.

In many computer architectures, program instructions are stored as data - hence the distinction between instruction and data is merely a matter of how the information is treated by the computer and programming language. Normally, instructions are executed and data is processed; however, in some languages, programs can also treat instructions as data and therefore make reflective modifications

Reflection is commonly used by programs which require the ability to examine or modify the runtime behavior of applications running in the Java virtual machine

 

Reflection is the mechanism by which Java exposes the features of a class during runtime, allowing Java programs to enumerate and access a class' methods, fields, and constructors as objects

 

 

4.1.1 Java.lang.reflect

Interface

AnnotatedElement

GenericDeclaration

InvocationHandler

Member

 

 

Classes

AccessibleObject

Array
Constructor

Field

Method

Modifier

Proxy

ReflectPermission 

 

 

 

Retrieving Class Objects

Object.getClass(): byte[] bytes = new byte[1024]; Class c = bytes.getClass();

.class: Class c1 = int[][][].class; Class c2 = boolean.class;

 

Class.forName(): Class cDoubleArray = Class.forName("[D"); Class cStringArray = Class.forName("[[Ljava.lang.String;");

 

Discovering class members

Member | Class API            | List of members? | Inherited members? | Private members?
Field  | getDeclaredField()   | no               | no                 | yes
Field  | getField()           | no               | yes                | no
Field  | getDeclaredFields()  | yes              | no                 | yes
Field  | getFields()          | yes              | yes                | no
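
A short sketch of the getDeclaredField() row above, reading a private field after setAccessible(true) (the Person class is hypothetical):

    import java.lang.reflect.Field;

    public class PrivateFieldAccess {
        static class Person {
            private String name = "Alice";
        }

        public static void main(String[] args) throws Exception {
            Person p = new Person();
            Field f = Person.class.getDeclaredField("name"); // includes private, excludes inherited
            f.setAccessible(true);                            // may be rejected under a security manager
            System.out.println(f.get(p));                     // prints "Alice"
        }
    }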

 

Reflecting Generics

Type 

TypeVariable

WildcardType

ParameterizedType

GenericArrayType

 

Drawback

 

Reflection is powerful, but should not be used indiscriminately. If it is possible to perform an operation without using reflection, then it is preferable to avoid using it. The following concerns should be kept in mind when accessing code via reflection.

Performance Overhead
Because reflection involves types that are dynamically resolved, certain Java virtual machine optimizations can not be performed. Consequently, reflective operations have slower performance than their non-reflective counterparts, and should be avoided in sections of code which are called frequently in performance-sensitive applications.
Security Restrictions
Reflection requires a runtime permission which may not be present when running under a security manager. This is an important consideration for code which has to run in a restricted security context, such as in an Applet.
Exposure of Internals
Since reflection allows code to perform operations that would be illegal in non-reflective code, such as accessing  private fields and methods, the use of reflection can result in unexpected side-effects, which may render code dysfunctional and may destroy portability. Reflective code breaks abstractions and therefore may change behavior with upgrades of the platform

 

4.1.2 java.lang.ref

PhantomReference, SoftReference, WeakReference

http://jnb.ociweb.com/jnb/archive/jnbJune2000.html

 

 

Weak reference objects, which do not prevent their referents from being made finalizable, finalized, and then reclaimed. Weak references are most often used to implement canonicalizing mappings.

 

Suppose that the garbage collector determines at a certain point in time that an object is weakly reachable. At that time it will atomically clear all weak references to that object and all weak references to any other weakly-reachable objects from which that object is reachable through a chain of strong and soft references.

 

 

Soft reference objects, which are cleared at the discretion of the garbage collector in response to memory demand. Soft references are most often used to implement memory-sensitive caches.

 

 

 

Phantom reference objects, which are enqueued after the collector determines that their referents may otherwise be reclaimed. Phantom references are most often used for scheduling pre-mortem cleanup actions in a more flexible way than is possible with the Java finalization mechanism. If the garbage collector determines at a certain point in time that the referent of a phantom reference is phantom reachable, then at that time or at some later time it will enqueue the reference.

 

 

http://thestrangeloop.com/sites/default/files/slides/BobLee_JavaReferences.pdf

 

Some things require manual cleanup.

 

• Listeners

 

• File descriptors

• Native memory

• External state 

Tools at your disposal

• finally

• Overriding finalize 

• References and references queue

 

 

The Levels of Reachability

 

> @since 1.2

> Reference types

 • Soft: for caching

 • Weak: for fast cleanup (pre-finalizer)

 • Phantom: for safe cleanup (post-finalizer):  replace a finalizer

> Reference queues: for notifications

 

 

> Strong

> Soft

> Weak

> Finalizer

> Phantom, JNI weak

> Unreachable

 

WeakReference
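
A small sketch showing that a weak reference does not keep its referent alive, and that the cleared reference can be observed through a ReferenceQueue (the byte[] payload and the sleep are illustrative; System.gc() is only a request, so the last two prints may vary):

    import java.lang.ref.ReferenceQueue;
    import java.lang.ref.WeakReference;

    public class WeakReferenceDemo {
        public static void main(String[] args) throws InterruptedException {
            ReferenceQueue<byte[]> queue = new ReferenceQueue<byte[]>();
            byte[] payload = new byte[1024];
            WeakReference<byte[]> ref = new WeakReference<byte[]>(payload, queue);

            System.out.println(ref.get() != null);   // true: a strong reference still exists

            payload = null;                          // drop the only strong reference
            System.gc();                             // request (not force) a collection
            Thread.sleep(100);

            System.out.println(ref.get());           // likely null once the referent is reclaimed
            System.out.println(queue.poll());        // the cleared reference may now be enqueued
        }
    }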

 

 

 

Accessing a phantom referent: PhantomReference.get() always returns null, so a phantom reference can never be used to access (or resurrect) its referent.

 

4.1.3 javax.lang.model (1.6)

 

http://tutorials.jenkov.com/java-reflection/private-fields-and-methods.html

http://en.wikibooks.org/wiki/Java_Programming/Reflection/Accessing_Private_Features_with_Reflection

 

 

Drawbacks of Reflection

 

http://en.wikipedia.org/wiki/Reflection_(computer_science)

http://www.ibm.com/developerworks/library/j-dyn0603/

http://java.sun.com/docs/books/tutorial/reflect/index.html

 

4.2 Practice

4.2.1 AOP

 

 

4.2.2 Annotation

 

 

4.2.3 Reflection Test

 

 

7. Annotation

 

 

6. Java Compiler JIT (http://acme1921209.javaeye.com/blog/59769)

 

 

5. Memory Model (GC)

http://chaoticjava.com/posts/how-does-garbage-collection-work/

http://www.artima.com/insidejvm/ed2/gcP.html

http://www.ibm.com/developerworks/library/j-jtp01274.html

http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html

 

5.1 Garbage Collection Algorithm

Reference counting was an early garbage collection strategy. A disadvantage is that reference counting does not detect cycles: two or more objects that refer to one another. 

 

 

Tracing Collectors (e.g. the Concurrent Mark-Sweep GC, CMS): Tracing garbage collectors trace out the graph of object references starting with the root nodes. Objects that are encountered during the trace are marked in some way. Marking is generally done by either setting flags in the objects themselves or by setting flags in a separate bitmap. After the trace is complete, unmarked objects are known to be unreachable and can be garbage collected.

The basic tracing algorithm is called "mark and sweep." This name refers to the two phases of the garbage collection process. In the mark phase, the garbage collector traverses the tree of references and marks each object it encounters. In the sweep phase, unmarked objects are freed, and the resulting memory is made available to the executing program. In the Java virtual machine, the sweep phase must include finalization of objects.

 

 

Garbage collectors of Java virtual machines will likely have a strategy to combat heap fragmentation. Two strategies commonly used by mark and sweep collectors are compacting and copying. Both of these approaches move objects on the fly to reduce heap fragmentation. Compacting collectors slide live objects over free memory space toward one end of the heap. In the process the other end of the heap becomes one large contiguous free area. All references to the moved objects are updated to refer to the new location.

Updating references to moved objects is sometimes made simpler by adding a level of indirection to object references. Instead of referring directly to objects on the heap, object references refer to a table of object handles. The object handles refer to the actual objects on the heap. When an object is moved, only the object handle must be updated with the new location. All references to the object in the executing program will still refer to the updated handle, which did not move. While this approach simplifies the job of heap defragmentation, it adds a performance overhead to every object access

 

A common copying collector algorithm is called "stop and copy." In this scheme, the heap is divided into two regions. Only one of the two regions is used at any time. Objects are allocated from one of the regions until all the space in that region has been exhausted. At that point program execution is stopped and the heap is traversed. Live objects are copied to the other region as they are encountered by the traversal. When the stop and copy procedure is finished, program execution resumes. Memory will be allocated from the new heap region until it too runs out of space. At that point the program will once again be stopped. The heap will be traversed and live objects will be copied back to the original region. The cost associated with this approach is that twice as much memory is needed for a given amount of heap space because only half of the available memory is used at any time.

 

One disadvantage of simple stop and copy collectors is that all live objects must be copied at every collection. This facet of copying algorithms can be improved upon by taking into account two facts that have been empirically observed in most programs in a variety of languages:

 

  1. Most objects created by most programs have very short lives.
  2. Most programs create some objects that have very long lifetimes. A major source of inefficiency in simple copying collectors is that they spend much of their time copying the same long-lived objects again and again.

 

Generational collectors address this inefficiency by grouping objects by age and garbage collecting younger objects more often than older objects. In this approach, the heap is divided into two or more sub-heaps, each of which serves one "generation" of objects. The youngest generation is garbage collected most often. As most objects are short-lived, only a small percentage of young objects are likely to survive their first collection. Once an object has survived a few garbage collections as a member of the youngest generation, the object is promoted to the next generation: it is moved to another sub-heap. Each progressively older generation is garbage collected less often than the next younger generation. As objects "mature" (survive multiple garbage collections) in their current generation, they are moved to the next older generation.

 

Garbage First Garbage Collector (G1)

The separation between the two generations is basically logical, so some regions are considered to be young and some old. All space reclamation in G1 is done through copying: G1 selects a set of regions, picks the surviving objects from those regions, and copies them to another set of regions.

 

http://www.infoq.com/news/2008/05/g1

http://research.sun.com/jtech/pubs/04-g1-paper-ismm.pdf

 

http://jiangyongyuan.javaeye.com/blog/356502


5.2 Memory Model

http://www.cs.umd.edu/~pugh/java/memoryModel/

http://java.sun.com/docs/books/jls/second_edition/html/memory.doc.html#26250

 

 

TODO

 

8. Socket

8.0 Internet protocol 

 

Application Layer: FTP, SMTP, HTTP, SSH, RPC, SOAP, Telnet, IRC

Transport Layer: TCP, UDP, TLS (SSL)

Internet Layer : IP, ICMP

Link Layer: ARP, Tunnels, Ethernet, DSL

HTTP (Application)

 

Status Code

1xx informational
2xx success

      ---- 200 OK

      ---- 201 Created

      ---- 202 Accepted

 

3xx redirect

      ---- 302 Found

      ---- 303 See other

      ---- 304 Not modified

      ---- 305 Use proxy

 

4xx client error

      ---- 400 Bad Request, such as malformed syntax

      ---- 401 Unauthorized

 

      ---- 403 Forbidden

      ---- 404 Not Found

5xx server error

 

      ---- 500 Internal Error

      ---- 502 Bad Gateway

 

      ---- 503 Service Unavailable

 

      ---- 504 Gateway timeout

 

 

http header

An ETag, or entity tag, is part of HTTP, the protocol for the World Wide Web. It is one of several mechanisms that HTTP provides for cache validation

 

https

Hypertext Transfer Protocol Secure (HTTPS) is a combination of the Hypertext Transfer Protocol with the SSL/TLS protocol to provide encryption and secure identification of the server. It uses port 443.

http://tomcat.apache.org/tomcat-5.5-doc/ssl-howto.html

 

 

 

TCP (Transport)

 

  1. LISTEN : In case of a server, waiting for a connection request from any remote client.
  2. SYN-SENT : waiting for the remote peer to send back a TCP segment with the SYN and ACK flags set. (usually set by TCP clients)
  3. SYN-RECEIVED : waiting for the remote peer to send back an acknowledgment after having sent back a connection acknowledgment to the remote peer. (usually set by TCP servers)
  4. ESTABLISHED : the port is ready to receive/send data from/to the remote peer.
  5. FIN-WAIT-1
  6. FIN-WAIT-2
  7. CLOSE-WAIT
  8. CLOSING
  9. LAST-ACK
  10. TIME-WAIT : represents waiting for enough time to pass to be sure the remote peer received the acknowledgment of its connection termination request. According to RFC 793, a connection can stay in TIME-WAIT for a maximum of four minutes (twice the maximum segment lifetime, 2*MSL).
  11. CLOSED

 

 

Too many TIME_WAIT

http://blog.csdn.net/william7495/archive/2010/03/30/5430480.aspx

http://stackoverflow.com/questions/813790/too-many-time-wait-connections

http://stackoverflow.com/questions/41602/how-to-forcibly-close-a-socket-in-time-wait

http://hi.baidu.com/%CF%B8%C6%B7%B3%C1%CF%E3/blog/item/db24882f0843293c1f3089cf.html

 

UDP(Transport)

http://www.roseindia.net/java/example/java/net/udp/multicast.shtml

InetAddress address types:

unicast: An identifier for a single interface. 

 

multicast: An identifier for a set of interfaces (typically belonging to different nodes)

 

 

DatagramSocket <- MulticastSocket

 

A multicast group is specified by a class D IP address and by a standard UDP port number. Class D IP addresses are in the range 224.0.0.0 to 239.255.255.255, inclusive. The address 224.0.0.0 is reserved and should not be used
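
A sketch of joining a multicast group and receiving one datagram with MulticastSocket (the group address 230.0.0.1 and port 4446 are illustrative):

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;

    public class MulticastReceiver {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("230.0.0.1"); // a class D address
            MulticastSocket socket = new MulticastSocket(4446);
            socket.joinGroup(group);                                 // start receiving group traffic

            byte[] buf = new byte[256];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);                                  // blocks until a datagram arrives
            System.out.println(new String(packet.getData(), 0, packet.getLength()));

            socket.leaveGroup(group);
            socket.close();
        }
    }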

http://java.sun.com/docs/books/tutorial/networking/datagrams/broadcasting.html

 

 

IP (Internet)

Historical classful network architecture
Class | First octet in binary | Range of first octet | Network ID | Host ID | Number of networks | Number of addresses
A     | 0XXXXXXX              | 0 - 127              | a          | b.c.d   | 2^7 = 128          | 2^24 - 2 = 16,777,214
B     | 10XXXXXX              | 128 - 191            | a.b        | c.d     | 2^14 = 16,384      | 2^16 - 2 = 65,534
C     | 110XXXXX              | 192 - 223            | a.b.c      | d       | 2^21 = 2,097,152   | 2^8 - 2 = 254

IPV4 subnetting

http://en.wikipedia.org/wiki/IP_address#IPv4_subnetting

 

 

The Address Resolution Protocol (ARP) is a computer networking protocol for determining a network host's link layer or hardware address when only its Internet Layer (IP) or Network Layer address is known. This function is critical in local area networking as well as for routing internetworking traffic across gateways (routers) based on IP addresses when the next-hop router must be determined. (In ARP spoofing attacks, the aim is to associate the attacker's MAC address with the IP address of another node.)

 

Digital Subscriber Line (DSL) is a family of technologies that provides digital data transmission over the wires of a local telephone network

Ethernet is a family of frame-based computer networking technologies for local area networks (LANs).It defines a number of wiring and signaling standards for the Physical Layer

 

DNS(Domain Name System)

The Domain Name System distributes the responsibility of assigning domain names and mapping those names to IP addresses by designating authoritative name servers for each domain. Authoritative name servers are assigned to be responsible for their particular domains, and in turn can assign other authoritative name servers for their sub-domains. This mechanism has made the DNS distributed and fault tolerant and has helped avoid the need for a single central register to be continually consulted and updated.

http://en.wikipedia.org/wiki/Root_nameserver

 

TTL (Time To Live): Time to live (sometimes abbreviated TTL) is a limit on the period of time or number of iterations or transmissions in computer and computer network technology that a unit of data (e.g. a packet) can experience before it should be discarded; examples include IP packets and DNS records.

 

MSL(maximum segment lifetime, 120 seconds): Maximum Segment Lifetime is the time a TCP segment can exist in the internetwork system. 

 

 

 

 

SMTP is specified for outgoing mail transport and uses TCP port 25.

 

10. IO

http://www.falkhausen.de/en/diagram/html/java.io.Reader.html

http://www.falkhausen.de/en/diagram/html/java.io.Writer.html

 

http://www.falkhausen.de/en/diagram/html/java.io.InputStream.html

http://www.falkhausen.de/en/diagram/html/java.io.OutputStream.html

 

Serializable

* implements  Serializable

* ObjectOutputStream.writeObject / ObjectInputStream.readObject

* private void writeObject(ObjectOutputStream os)

          os.defaultWriteObject();

           ...

  private void readObject(ObjectInputStream is)

           is.defaultReadObject();

            ...

* Externalizable 

   public void readExternal(ObjectInput in)

   public void writeExternal(ObjectOutput out)

 

Serializable 

* if an object has a reference data member, the referenced type also needs to be serializable, or the field must be declared transient

* if the parent is serializable, the child is serializable (instanceof Serializable passes for the child)

* if the child is serializable while the parent is not, then an accessible no-arg constructor must be provided for the parent 

* different JDK versions may have different serialization formats 

 

During deserialization, the fields of non-serializable classes will be initialized using the public or protected no-arg constructor of the class. A no-arg constructor must be accessible to the subclass that is serializable. The fields of serializable subclasses will be restored from the stream.
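
A self-contained sketch of the writeObject/readObject hooks listed above, with a transient field restored by hand (class and field names are illustrative):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    // defaultWriteObject/defaultReadObject handle the non-transient fields;
    // the transient field is written and restored explicitly.
    public class Profile implements Serializable {
        private static final long serialVersionUID = 1L;

        private String user;
        private transient int loginCount;            // skipped by default serialization

        public Profile(String user, int loginCount) {
            this.user = user;
            this.loginCount = loginCount;
        }

        private void writeObject(ObjectOutputStream os) throws IOException {
            os.defaultWriteObject();                 // writes "user"
            os.writeInt(loginCount);                 // then write the extra state ourselves
        }

        private void readObject(ObjectInputStream is) throws IOException, ClassNotFoundException {
            is.defaultReadObject();                  // restores "user"
            loginCount = is.readInt();
        }

        public static void main(String[] args) throws Exception {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(buffer);
            out.writeObject(new Profile("alice", 7));
            out.close();

            ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray()));
            Profile copy = (Profile) in.readObject();
            System.out.println(copy.user + " logged in " + copy.loginCount + " times");
        }
    }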

11. OO

13. UML

http://en.wikipedia.org/wiki/Unified_Modeling_Language#Structure_diagrams

http://en.wikipedia.org/wiki/Class_diagram

 

 

14.  Database

In the field of relational database design, normalization is a systematic way of ensuring that a database structure is suitable for general-purpose querying and free of certain undesirable characteristics—insertion, update, and deletion anomalies—that could lead to a loss of data integrity

Normal form                   | Defined by                                                                                         | Brief definition
First normal form (1NF)       | Two versions: E.F. Codd (1970), C.J. Date (2003)                                                  | Table faithfully represents a relation and has no repeating groups
Second normal form (2NF)      | E.F. Codd (1971)                                                                                   | No non-prime attribute in the table is functionally dependent on a part (proper subset) of a candidate key
Third normal form (3NF)       | E.F. Codd (1971); see also Carlo Zaniolo's equivalent but differently-expressed definition (1982) | Every non-prime attribute is non-transitively dependent on every key of the table
Boyce-Codd normal form (BCNF) | Raymond F. Boyce and E.F. Codd (1974)                                                             | Every non-trivial functional dependency in the table is a dependency on a superkey

 

Denormalization is the process of attempting to optimize the read performance of a database by adding redundant data or by grouping data. In some cases, denormalization helps cover up the inefficiencies inherent in relational database software. A relational normalized database imposes a heavy access load over physical storage of data even if it is well tuned for high performance.

 

Index architectures can be classified as clustered or non-clustered.

There are clustered and nonclustered indexes. A clustered index is a special type of index that reorders the way records in the table are physically stored. Therefore, a table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.


A nonclustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf node of a nonclustered index does not consist of the data pages. Instead, the leaf nodes contain index rows

http://www.itwis.com/html/database/sqlserver/20090611/4602.html

 

 

http://en.wikipedia.org/wiki/SQL

  • The FROM clause which indicates the table(s) from which data is to be retrieved. The FROM clause can include optional JOIN subclauses to specify the rules for joining tables.
  • The WHERE clause includes a comparison predicate, which restricts the rows returned by the query. The WHERE clause eliminates all rows from the result set for which the comparison predicate does not evaluate to True.
  • The GROUP BY clause is used to project rows having common values into a smaller set of rows. GROUP BY is often used in conjunction with SQL aggregation functions or to eliminate duplicate rows from a result set. The WHERE clause is applied before the GROUP BY clause.
  • The HAVING clause includes a predicate used to filter rows resulting from the GROUP BY clause. Because it acts on the results of the GROUP BY clause, aggregation functions can be used in the HAVING clause predicate.
  • The ORDER BY clause identifies which columns are used to sort the resulting data, and in which direction they should be sorted (options are ascending or descending). Without an ORDER BY clause, the order of rows returned by an SQL query is undefined.

http://en.wikipedia.org/wiki/Join_(SQL)

 

 

12. JDK History

 

 

http://robaustin.wikidot.com/jvm-garbage-collector-overview
