
Report

Task 1

First, let us summarize the methods and classes that Task 1 requires: a LinkedList body class used to store String values, a Node class, add and delete methods, an emptiness-check method, a method that returns the list size, and a method that adds to the head of the list. We must also pay attention to implementing an error-handling mechanism.

To keep development consistent, we can start by defining an interface that specifies our methods and inner types.

public interface List {

    interface Node {
        Object getValue();
        void SetValue(Object value);
        Node getPre();
        Node getNext();
    }

    // get the length of the LinkedList
    long length();
    // return the head node
    Node first();
    // return the tail node
    Node last();
    // add a node by inserting at the head
    List addNodeHead(Object value);
    // add a node by inserting at the tail
    List addNodeTail(Object value);
    // search for a specific element
    Node searchKey(Object key);
    // check whether the LinkedList is empty
    String isEmpty();
    // delete the node with the specified value
    void delNode(Object key);
    // clear the list
    void release();
}

Since head insertion and tail insertion both run in O(1) time on a doubly linked list with a tail pointer, and their implementations are similar, we can implement them together. Please refer to the comments for the specific function of each method.

The next step is to create a LinkedList class that implements our predefined interface. Its main fields are as follows:

// head node
private ListNode head;
// tail node
private ListNode tail;
// list size
private long len;

Their getters are straightforward and will not be detailed here; please refer to the source code and its comments.

Since the main requirement of the task is head insertion, this section explains the head-insertion method; for tail insertion, please refer to the source code.

public List addNodeHead(Object value) {
    ListNode node = new ListNode();
    // keep the value pointer
    node.value = value;

    // insert
    if (this.len == 0) {
        this.head = this.tail = node;
        node.prev = node.next = null;
    } else {
        node.prev = null;
        node.next = this.head;
        this.head.prev = node;
        this.head = node;
    }

    this.len++;
    return this;
}

First, save the value to be inserted into the newly created node object, then check the state of the list. When its length is 0, the new node becomes both head and tail, and its prev and next pointers are set to null. When its length is not zero, the new node's prev is set to null, its next is set to the previous head, the old head's prev is pointed back to the new node, and the head pointer is moved to the new node.

The test method is as follows:

public void testAddNodeHead() {
    LinkedList linkedList = new LinkedList();
    linkedList.addNodeHead(1);
    linkedList.addNodeHead(2);
    linkedList.addNodeHead(3);
    linkedList.addNodeHead("s");
    linkedList.addNodeHead('a');
    linkedList.addNodeHead(false);
    linkedList.addNodeHead(1.1);

    List.Node currentNode = linkedList.first();
    while (currentNode != null) {
        System.out.print(currentNode.getValue() + "\t");
        currentNode = currentNode.getNext();
    }
}

The results are as follows:

The corresponding tail-insertion test code is as follows:

public void testAddNodeTail() {
    LinkedList linkedList = new LinkedList();
    linkedList.addNodeTail(1);
    linkedList.addNodeTail(2);
    linkedList.addNodeTail(3);
    linkedList.addNodeTail("s");
    linkedList.addNodeTail('a');
    linkedList.addNodeTail(false);
    linkedList.addNodeTail(1.1);

    List.Node currentNode = linkedList.first();
    while (currentNode != null) {
        System.out.print(currentNode.getValue() + "\t");
        currentNode = currentNode.getNext();
    }
}

The results are as follows:

The next method accesses a specific node, using a while loop to traverse the list.

@Override
public Node searchKey(Object key) {
    ListNode current = this.head;
    while (current != null) {
        if (current.getValue().equals(key)) {
            return current;
        }
        current = (ListNode) current.getNext();
    }
    throw new RuntimeException("no such Node");
}

This method returns the first node whose value equals the search parameter. If no such node is found, it throws the exception "no such Node".

The test code is as follows:

public void testSearchKey() {
    LinkedList linkedList = new LinkedList();
    linkedList.addNodeTail(1);
    linkedList.addNodeTail(2);
    linkedList.addNodeTail(3);
    linkedList.addNodeTail("s");
    linkedList.addNodeTail('a');
    linkedList.addNodeTail(false);
    linkedList.addNodeTail(1.1);

    List.Node currentNode = linkedList.first();
    while (currentNode != null) {
        System.out.print("the node is: " + currentNode + " the value is: " + currentNode.getValue() + "\n");
        currentNode = currentNode.getNext();
    }
    System.out.println(linkedList.searchKey(2));
    System.out.println(linkedList.searchKey(12));
}

Since 2 exists and 12 does not, 2 can be found, but the search for 12 will throw an exception on the last line of code.

The test results are as follows:

From the figure, we can see that the memory address found matches the node holding 2, so the method runs successfully. When we search for 12, a not-found exception is thrown, just as we expected.

For the method that returns the size of the list, we can simply return the len field via its getter:

@Override
public long length() {
    return this.len;
}

Test code:

public void testTestLength() {
    LinkedList linkedList = new LinkedList();
    linkedList.addNodeTail(1);
    linkedList.addNodeTail(2);
    linkedList.addNodeTail(3);
    System.out.println(linkedList.length());
}

result:

The next method determines whether the list is empty:

@Override
public String isEmpty() {
    if (this.head != null && len != 0) {
        return "this linkedlist is not null.";
    }
    throw new RuntimeException("this is an empty linkedlist.");
}

Because the task requires error handling and user reminders, the method only returns a String when the head node is not null and the list length is not 0; otherwise it throws an exception.

The test code is as follows:

public void testIsEmpty() {
    LinkedList linkedList = new LinkedList();
    try {
        System.out.println(linkedList.isEmpty());
    } catch (Exception e) {
        System.out.println("the error is: " + e);
    }
    linkedList.addNodeTail(1);
    System.out.println(linkedList.isEmpty());
}

Exception catching is used here: a newly created list contains no nodes until we add one, so the first call throws the empty-list exception, and after we add a node the second call returns the non-empty message.

Next we implement deletion of the node with a specified value:

@Override
public void delNode(Object key) {
    ListNode aimNode = (ListNode) searchKey(key);
    // re-link the neighbours, updating head/tail at the boundaries
    if (aimNode.prev != null) {
        aimNode.prev.next = aimNode.next;
    } else {
        this.head = aimNode.next; // the deleted node was the head
    }
    if (aimNode.next != null) {
        aimNode.next.prev = aimNode.prev;
    } else {
        this.tail = aimNode.prev; // the deleted node was the tail
    }
    len--;
}

Here we reuse the searchKey method to locate the node, which also gives us its error handling for free: if the key is absent, searchKey throws before any pointers are touched. Deleting a node is then a simple re-linking of its neighbours' pointers; the null checks handle the cases where the deleted node is the head or the tail.

The test code is as follows:

public void testDelNode() {
    LinkedList linkedList = new LinkedList();
    linkedList.addNodeTail(1);
    linkedList.addNodeTail(2);
    linkedList.addNodeTail(3);
    linkedList.addNodeTail("s");
    linkedList.addNodeTail('a');
    linkedList.addNodeTail(false);
    linkedList.addNodeTail(1.1);

    List.Node currentNode = linkedList.first();
    linkedList.delNode(2);
    while (currentNode != null) {
        System.out.print("the node is: " + currentNode + " the value is: " + currentNode.getValue() + "\n");
        currentNode = currentNode.getNext();
    }
}

The node whose value is 2 is successfully deleted.

Next, I will describe the implementation of my node class. I implement it as a static nested class: unlike a non-static inner class, a static nested class holds no implicit reference to an instance of the enclosing LinkedList, so each node object is smaller and can be created without an enclosing instance, reducing runtime overhead. At the same time, I declared the value field as the top-level Object class, which means the linked list can store any type of data, not just the String type.
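This difference between a static nested class and a non-static inner class can be illustrated with a minimal sketch (the class names Outer, Inner, and Nested are hypothetical, not part of the coursework code):

```java
public class Outer {
    class Inner { }          // each instance carries a hidden reference to an Outer
    static class Nested { }  // no reference to any Outer instance

    public static void main(String[] args) {
        Outer.Nested n = new Outer.Nested();     // no enclosing instance required
        Outer.Inner i = new Outer().new Inner(); // enclosing instance required
        System.out.println(n.getClass().getSimpleName()); // prints Nested
        System.out.println(i.getClass().getSimpleName()); // prints Inner
    }
}
```

The ListNode class follows the Nested pattern, which is why a node can be created without first having a LinkedList instance in hand.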

// create node
private static class ListNode implements List.Node {
    /**
     * prev: previous node
     * next: next node
     * value: node value
     */
    ListNode prev;
    ListNode next;
    Object value;

    @Override
    public Object getValue() {
        return this.value;
    }

    @Override
    public void SetValue(Object value) {
        this.value = value;
    }

    @Override
    public Node getPre() {
        return this.prev;
    }

    @Override
    public Node getNext() {
        return this.next;
    }
}

What remains is some test cases and test results for other methods:

public void testFirst() {
    LinkedList linkedList = new LinkedList();
    try {
        System.out.println(linkedList.first().getValue());
    } catch (Exception e) {
        System.out.println("the error is: " + e);
    }
    linkedList.addNodeHead('b');
    System.out.println(linkedList.first().getValue());
}

public void testLast() {
    LinkedList linkedList = new LinkedList();
    try {
        System.out.println(linkedList.last().getValue());
    } catch (Exception e) {
        System.out.println("the error is: " + e);
    }
    linkedList.addNodeHead('a');
    linkedList.addNodeTail('b');
    // the output should be b
    System.out.println(linkedList.last().getValue());
}

Finally, there is the release method:

@Override
public void release() {
    this.head = null;
    this.tail = null;
    this.len = 0;
}

We null out the head and tail references and reset the length; the now-unreachable nodes are then reclaimed automatically by the JVM's GC mechanism.

Task2

Task 2 requires us to create a hash table that uses linear probing to resolve hash collisions, with insert and delete methods, and it must print the load factor during insertion and deletion. To get this done quickly, I no longer use an interface to define the methods and constants.

First is my Entry object, which lives as an inner class of the LinearProbingHashTable class.

private static class Entry {
    String key;
    String value;

    Entry(String key, String value) {
        this.key = key;
        this.value = value;
    }
}

Then come the fields of LinearProbingHashTable: an Entry array that holds the entries, size for the number of used slots, capacity for the total capacity, and the load-factor threshold used to decide when to expand, which I set to 0.4 so that the table expands before linear probing produces frequent collisions.

private Entry[] table;
private int size;
private int capacity;
private double loadFactorThreshold = 0.4;

The next step is the constructors. We overload them: one that accepts a capacity, and a no-argument one that creates a table of the default size. Reading the source code of Java's Hashtable, we can see that the default capacity is 11.

public LinearProbingHashTable(int capacity) {
    this.capacity = capacity;
    this.table = new Entry[capacity];
    this.size = 0;
}

public LinearProbingHashTable() {
    this.capacity = 11;
    this.table = new Entry[capacity];
    this.size = 0;
}

Next is the hash function. We use the hash code as the index into the Entry array, storing each entry at the corresponding slot. To place entries in a table with an initial capacity of 11, we need a reasonable hash function that always yields an index smaller than capacity; an out-of-range index would otherwise force an immediate expansion.

private synchronized int hashFunction(String key) {
    // Simple hash function for string keys
    int hash = 0;
    for (char c : key.toCharArray()) {
        hash = (hash * 31 + c) % capacity;
    }
    return hash;
}

Next, consider the search and delete methods. Since delete must first find the key, it effectively contains the search logic; the search code is not shown here, so please refer to the source implementation.
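Since the search code is omitted, here is a hedged, self-contained sketch of what such a probing lookup could look like (the class wrapper and field names mirror the surrounding LinearProbingHashTable, but the details are assumptions, not the author's actual code):

```java
public class ProbingSearchSketch {
    static class Entry {
        String key, value;
        Entry(String key, String value) { this.key = key; this.value = value; }
    }

    int capacity = 11;
    Entry[] table = new Entry[capacity];

    int hashFunction(String key) {
        int hash = 0;
        for (char c : key.toCharArray()) {
            hash = (hash * 31 + c) % capacity;
        }
        return hash;
    }

    String search(String key) {
        int index = hashFunction(key);
        // probe forward until an empty slot proves the key is absent
        while (table[index] != null) {
            if (table[index].key.equals(key)) {
                return table[index].value;
            }
            index = (index + 1) % capacity;
        }
        return null;
    }

    public static void main(String[] args) {
        ProbingSearchSketch t = new ProbingSearchSketch();
        int i = t.hashFunction("a");
        t.table[i] = new Entry("b", "collided");            // simulates a prior probe
        t.table[(i + 1) % t.capacity] = new Entry("a", "found");
        System.out.println(t.search("a")); // probes past the collision
    }
}
```

The key point is that the loop stops at the first null slot, which is exactly why the delete method below cannot simply null out a slot without consequences.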

public synchronized void delete(String key) {
    int index = hashFunction(key);
    while (table[index] != null) {
        if (table[index].key.equals(key)) {
            // note: nulling a slot can break later probe chains;
            // production implementations use a "deleted" marker instead
            table[index] = null;
            size--;
            displayLoadFactor();
            return;
        }
        index = (index + 1) % capacity;
    }
    displayLoadFactor();
}

The line

index = (index + 1) % capacity;

moves the probe to the next array position, wrapping around to the start of the array when the end is reached.
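The wrap-around behaviour can be checked with a tiny runnable snippet (the class name is hypothetical):

```java
public class ProbeWrapDemo {
    public static void main(String[] args) {
        int capacity = 11;
        // probing from the last slot wraps back to the front of the array
        int index = 10;
        index = (index + 1) % capacity;
        System.out.println(index); // prints 0
    }
}
```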

The test code is as follows:

The test results are as follows:

We can see that the entry with key 10 has been deleted.

The search method, likewise, returns null if the key is absent, and the corresponding value if it is present:

Next is our resize method. With linear probing, collisions consume neighbouring slots, so the array fills up quickly; when the load factor reaches our threshold (0.4 here), the table expands. From the Hashtable source code we know that each expansion grows the capacity to 2n+1. This expansion, which we call rehashing, creates an array with the new capacity and then moves every entry from the old slots into the new ones.
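As a quick check of the 2n+1 rule (the class and method names here are hypothetical), the capacity sequence starting from the default 11 can be generated like this:

```java
public class CapacityGrowthDemo {
    // the same 2n+1 growth rule Hashtable applies on rehash
    static int grow(int capacity) {
        return 2 * capacity + 1;
    }

    public static void main(String[] args) {
        int capacity = 11; // Hashtable's default initial capacity
        for (int i = 0; i < 3; i++) {
            capacity = grow(capacity);
            System.out.println(capacity); // prints 23, 47, 95
        }
    }
}
```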

private synchronized void resize() {
    // Resize the table when the load factor exceeds the threshold:
    // grow the capacity to 2n+1 and rehash all entries
    capacity = 2 * capacity + 1;
    Entry[] oldTable = table;
    table = new Entry[capacity];
    size = 0;

    for (Entry entry : oldTable) {
        if (entry != null) {
            insert(entry.key, entry.value);
        }
    }
}

We will test this method together with the insert method.

Next we introduce the insert method. We need to check whether key is null. From the Java source code, we know that neither keys nor values of Hashtable may be null. Hashtable is thread-safe: every method is synchronized, so the whole table is locked on every add, delete, update, and lookup, which makes Hashtable very inefficient. HashMap is far more common in Java, because it has a better treeification mechanism for resolving hash collisions, and because it is not thread-safe it does not need to acquire a lock to modify the table, so it is substantially more efficient. If we need thread safety, we should use ConcurrentHashMap: before JDK 1.8 it used segment locks, locking only part of the underlying array, so two threads touching different segments never block each other; since JDK 1.8 the segment locks were replaced by finer-grained per-node locking (on list or tree nodes), which is more flexible. So Hashtable, like Vector among the lists, is an ancient implementation that was effectively deprecated long ago. Because Hashtable is thread-safe, null keys and values are not allowed: if a lookup returned null, it would be ambiguous whether the entry is missing or its value is null. For the same reason, null keys and values are also forbidden in ConcurrentHashMap.
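The null-key ambiguity can be demonstrated directly with the standard collections (a small sketch; the variable names are arbitrary):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("k", null);  // HashMap permits null values (and one null key)
        // both lookups return null, so null alone cannot distinguish
        // "absent key" from "present key with null value"
        System.out.println(map.get("k"));               // null
        System.out.println(map.get("missing"));         // null
        System.out.println(map.containsKey("k"));       // true
        System.out.println(map.containsKey("missing")); // false

        Hashtable<String, String> table = new Hashtable<>();
        try {
            table.put(null, "v"); // Hashtable rejects null keys outright
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null keys");
        }
    }
}
```

HashMap sidesteps the ambiguity with containsKey; Hashtable (and ConcurrentHashMap) avoid it by banning null keys and values entirely.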

public synchronized void insert(String key, String value) {
    if (key == null) {
        throw new IllegalArgumentException("Key cannot be null");
    }

    if ((double) (size + 1) / capacity > loadFactorThreshold) {
        resize();
    }

    int index = hashFunction(key);
    while (table[index] != null) {
        if (table[index].key.equals(key)) {
            // Key already exists, update the value
            table[index].value = value;
            return;
        }
        index = (index + 1) % capacity;
    }

    table[index] = new Entry(key, value);
    size++;
    displayLoadFactor();
}

The test code is as follows:

public void testInsert() {
    table.insert("1", "2");
    table.insert("3", "4");
    table.insert("22", "33");
    table.insert("32", "33");
    table.insert("2", "34");
    table.insert("4", "34");
    table.insert("5", "34");
    table.insert("6", "34");
    table.insert("7", "34");
    table.insert("8", "34");
    table.insert("9", "34");
    table.insert("10", "34");
    table.insert("11", "34");
    table.insert("12", "34");
    table.insert("13", "34");
    table.insert("14", "34");
    table.insert("15", "34");
    System.out.println("the capacity is: " + table.getCapacity());
    table.insert("16", "34");
    System.out.println("the capacity is: " + table.getCapacity());
}

The running results below show that the capacity expands successfully.

Next are the methods that compute and print the load factor; they are simple enough to need no further description.

public synchronized double getLoadFactor() {
    return (double) size / capacity;
}

public synchronized void displayLoadFactor() {
    System.out.println("Load Factor: " + getLoadFactor());
}

I also added a traversal method that prints all key-value pairs, for use in the test cases.

public synchronized void printAll() {
    for (Entry entry : table) {
        if (entry != null) {
            System.out.println("Key: " + entry.key + ", Value: " + entry.value);
        }
    }
}

Task3

Next I will introduce the hash table that uses separate chaining (the "zipper" method) to resolve collisions.

The node is defined as follows; it replaces the Entry of the previous hash table. Since each bucket is a singly linked list, we add a next pointer to the following node:

private static class Node {
    String key;
    String value;
    Node next;

    Node(String key, String value) {
        this.key = key;
        this.value = value;
    }
}

Again we define two constructors, one that takes a capacity and one that uses the default. The same hash-table fields and getters are kept for testing.

private Node[] table;
private int size;
private int capacity;
private double loadFactorThreshold = 0.75;

public ChainingHashTable(int capacity) {
    this.capacity = capacity;
    this.table = new Node[capacity];
    this.size = 0;
}

public ChainingHashTable() {
    this.capacity = 11;
    this.table = new Node[capacity];
    this.size = 0;
}

public int getCapacity() {
    return capacity;
}

Our hash function already spreads keys well enough to get the hash codes we want (in Task 2 we lowered the load-factor threshold to force expansion instead), so the hash method needs no structural change here; it already handles our test data and keeps collisions rare.

private int hashFunction(String key) {
    // Custom hash function for string keys
    int hash = 0;
    for (char c : key.toCharArray()) {
        hash = (hash * 37 + c) % capacity;
    }
    return hash;
}

Next I'll describe the insert method in more detail. Since we use chaining to handle hash collisions, the key is, as in Task 2, not allowed to be null. To ensure thread safety, all methods are synchronized with a heavyweight lock, as in Hashtable.

As before, if the load factor would exceed our threshold of 0.75, the table expands (the resize method is introduced later). We use the hash method to compute the index; if the hash slot is null, we assign the new node to it directly and set the flag newSlot to true. Note that our load factor is the ratio of used hash slots to total capacity, not the ratio of nodes to capacity: nodes hanging off a chain do not count, so the load factor only changes when a previously empty slot becomes used.

If table[index] is not null, there are two cases. First, the slot already stores a chain. Following the Hashtable source, we use equals to check whether the inserted pair matches a node in the chain: if both key and value match, we print a message that the node already exists and return; if the key matches but the value differs, we update current.value and return. We also keep a count of the nodes in the chain. If the chain has at most 8 nodes and the slot's chain really belongs to this key's hash, we simply append the new node at the end of the chain. If the chain is longer than 8, or the slot's hash does not equal the inserted key's hash (why this check is needed is explained below), we probe linearly: if another slot holds a chain whose hash equals our key's hash and has fewer than 8 nodes, we append there; if it has more than 8 we keep looking; and if no such slot exists we use linear probing to find a null slot for the new node.

It is precisely because I combine chaining with this linear-probing fallback that the hash codes must be compared first: a slot may already have been occupied by an earlier probe for a different hash, in which case the key check at the top fails and control falls into the else branch, where a further linear probe finds another empty slot as the storage location. The point of all this is to avoid losing data once a chain exceeds 8 nodes, but the cost is high: with many collisions, many chains exceed 8 and occupy other slots, so the data layout becomes quite chaotic. The better options are a more optimized hash function, removing the chain-length limit as the Hashtable source does, or HashMap's tree mechanism: when the array length exceeds 64 and a chain exceeds 8 nodes, the chain is converted into a red-black tree, which keeps lookups fast (a linked-list search is linear, O(n), while a red-black tree search is O(log n)). In practice, however, we do not insert tens of thousands of entries and our hash function is adequate, so the current mechanism, chaining combined with a linear-probing fallback, is enough to cope with excessive collisions.

public synchronized void insert(String key, String value) {
    boolean newSlot = false;
    if (key == null) {
        throw new IllegalArgumentException("Key cannot be null");
    }

    if ((double) (size + 1) / capacity > loadFactorThreshold) {
        resize();
    }

    int index = hashFunction(key);
    Node newNode = new Node(key, value);

    if (table[index] == null) {
        table[index] = newNode;
        newSlot = true;
    } else {
        Node current = table[index];
        int count = 0;
        while (current != null) {
            if (current.key.equals(key)) {
                if (current.value.equals(value)) {
                    System.out.println("The node with key " + key + " already exists.");
                    return; // the same key-value pair exists, do nothing
                }
                // same key, different value: update the value
                current.value = value;
                return;
            }
            count++;
            current = current.next;
        }
        // If the chain has at most eight nodes, or the slot's chain
        // matches this key's hash, use plain chaining
        if (count <= 8 || hashFunction(table[index].key) == hashFunction(key)) {
            current = table[index];
            while (current.next != null) {
                current = current.next;
            }
            current.next = newNode;
        } else {
            // Chain longer than 8 with a foreign hash: fall back to linear probing
            boolean found = false;
            int probeIndex = index;
            while (!found) {
                probeIndex = (probeIndex + 1) % capacity;
                if (table[probeIndex] == null || hashFunction(table[probeIndex].key) != hashFunction(key)) {
                    continue; // skip empty and non-matching slots
                }
                current = table[probeIndex];
                count = 0;
                while (current != null) {
                    count++;
                    current = current.next;
                }
                // If a matching chain with at most 8 nodes is found, append there
                if (count <= 8) {
                    current = table[probeIndex];
                    while (current.next != null) {
                        current = current.next;
                    }
                    current.next = newNode;
                    found = true;
                }
                // If the entire table is traversed without a suitable chain,
                // occupy a new empty slot
                if (probeIndex == index) {
                    while (table[probeIndex] != null) {
                        probeIndex = (probeIndex + 1) % capacity;
                    }
                    table[probeIndex] = newNode;
                    newSlot = true;
                    found = true;
                }
            }
        }
    }
    if (newSlot) {
        size++;
    }
    displayLoadFactor();
}

The test code is as follows. Since I use assert statements for ease of testing, I also use a for loop as a stress test:

ChainingHashTable ht = new ChainingHashTable();

public void testInsert() {
    // Test inserting a single key-value pair
    System.out.println(ht.getCapacity());
    ht.insert("key1", "value1");
    assert(ht.search("key1").equals("value1"));

    // Test inserting multiple key-value pairs
    ht.insert("key2", "value2");
    ht.insert("key3", "value3");
    assert(ht.search("key2").equals("value2"));
    assert(ht.search("key3").equals("value3"));

    // Test inserting a key-value pair with an existing key
    ht.insert("key1", "value4");
    assert(ht.search("key1").equals("value4"));

    // Test inserting key-value pairs that trigger a resize
    for (int i = 4; i <= 100; i++) {
        ht.insert("key" + i, "value" + i);
        System.out.println(ht.getCapacity());
    }
    assert(ht.search("key100").equals("value100"));
}

Result:

(This is only part of the test results, please refer to the test code for details)

Then comes our query method. It traverses the underlying array and the chain in each hash slot, uses equals to test for a matching key, and returns the value when one is found.

public synchronized String search(String key) {
    for (int i = 0; i < capacity; i++) {
        Node current = table[i];
        while (current != null) {
            if (current.key.equals(key)) {
                return current.value;
            }
            current = current.next;
        }
    }
    return null; // return null if the key is not found
}

The test code is as follows:

public void testSearch() {
    // Create a new hash table and insert some key-value pairs
    ht.insert("key1", "value1");
    ht.insert("key2", "value2");
    ht.insert("key3", "value3");

    // Test searching for an existing key
    assert(ht.search("key1").equals("value1"));
    // Test searching for a non-existing key
    assert(ht.search("key4") == null);
}

result:

The delete method uses two pointers: prev points to the previous node and current to the current node. To delete current, we simply point prev.next at current.next.

public synchronized void delete(String key) {
    for (int i = 0; i < capacity; i++) {
        Node current = table[i];
        Node prev = null;
        while (current != null) {
            if (current.key.equals(key)) {
                if (prev == null) {
                    table[i] = current.next;
                    if (table[i] == null) {
                        size--;  // only decrease size if the slot becomes empty
                    }
                } else {
                    prev.next = current.next;
                }
                displayLoadFactor();
                return;
            }
            prev = current;
            current = current.next;
        }
    }
    displayLoadFactor();
}

The test code is as follows:

public void testDelete() {
    // Create a new hash table and insert some key-value pairs
    ht.insert("key1", "value1");
    ht.insert("key2", "value2");
    ht.insert("key3", "value3");

    // Test deleting an existing key
    ht.delete("key2");
    assert(ht.search("key2") == null);

    // Test deleting a non-existing key
    ht.delete("key4");
    assert(ht.search("key4") == null);

    // Test deleting a key that has already been deleted
    ht.delete("key2");
    assert(ht.search("key2") == null);
}

result:

The next step is the resize method, which creates a new array and re-inserts the contents of every hash slot into it. New compared with Task 2 is the newSize counter used to recompute the load factor.

private synchronized void resize() {
    capacity = capacity * 2 + 1;
    Node[] oldTable = table;
    table = new Node[capacity];
    int newSize = 0;

    for (Node node : oldTable) {
        Node current = node;
        while (current != null) {
            int index = hashFunction(current.key);
            if (table[index] == null) {
                newSize++;
            }
            insert(current.key, current.value);
            current = current.next;
        }
    }
    size = newSize;
}

There is no separate test for this method; the successful expansion can be seen in the earlier insert test results.

Task4

I have written tests for each method along the way. I created a Maven project and added the JUnit 3 plugin so the tests run under JUnit, which is convenient and conforms to Java development conventions. In the main class, since my methods are well encapsulated, I cannot call them as directly as in the JUnit test class. Instead, I changed some access flags: hashFunction became public, Node went from private to protected, and Main inherits from ChainingHashTable to reach the encapsulated Node inner class. And because my original hashFunction makes it hard to find two strings with the same hash code, I simplified it so the chaining behaviour is easy to observe:

public int hashFunction(String key) {
    // Custom hash function for string keys
    int hash = 0;
//    for (char c : key.toCharArray()) {
//        hash = (hash * 37 + c) % capacity;
//    }
    for (char c : key.toCharArray()) {
        hash = (hash + c) % capacity;
    }
    return hash;
}
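With this simplified additive hash, any permutation of the same characters collides by construction, which is exactly what the test below exploits. A self-contained sketch (class name hypothetical) makes the collision visible:

```java
public class PermutationHashDemo {
    // simplified additive hash: permutations of the same characters
    // produce the same hash, since addition is order-independent
    static int hashFunction(String key, int capacity) {
        int hash = 0;
        for (char c : key.toCharArray()) {
            hash = (hash + c) % capacity;
        }
        return hash;
    }

    public static void main(String[] args) {
        int capacity = 11;
        System.out.println(hashFunction("abc", capacity)); // prints 8
        System.out.println(hashFunction("cba", capacity)); // prints 8
        System.out.println(hashFunction("bac", capacity)); // prints 8
    }
}
```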

The test code is as follows:

System.out.println("____________________________");
ht.insert("abc", "1");
ht.insert("acb", "2");
ht.insert("bac", "3");
ht.insert("bca", "4");
ht.insert("cab", "5");
ht.insert("cba", "6");

int code = ht.hashFunction("abc");
ChainingHashTable.Node[] table1 = ht.getTable();
Node current = table1[code];
while (current != null) {
    System.out.println("the key is: " + current.getKey()
            + "\tthe value is: " + current.getValue());
    current = current.getNext();
}

We then successfully obtain table[index], i.e. all the nodes of the chain in one hash slot together with their keys and values, which proves that our chaining method works.

Test code:

LinearProbingHashTable table = new LinearProbingHashTable();



table.insert("abc", "1");

table.insert("acb", "2");

table.insert("bac", "3");

table.insert("bca", "4");

table.insert("cab", "5");

table.insert("cba", "6");

table.printAll();

LinearProbingHashTable.Entry[] table2 = table.getTable();

int code1 = table.hashFunction("abc");

LinearProbingHashTable.Entry entry1 = table2[code1 + 1];

LinearProbingHashTable.Entry entry = table2[code1];

System.out.println("the key is: " + entry.getKey()

        + "\tthe value is: " + entry.getValue());

System.out.println("the key is: " + entry1.getKey()

        + "\tthe value is: " + entry1.getValue());

Again, as in the previous test, we change some access flags of the encapsulated class and simplify hashFunction. After printing, we find that each hash slot holds only one value, and neighbouring slots hold the other values with the same hash code.

From the test cases we can compare the two approaches. Linear probing resolves collisions by searching forward for the next empty slot, so it occupies more slots than chaining, and its data layout is more chaotic: when you use the hash code as the array index, the slot you land on may already be occupied by a key with a different hash. Chaining is tidier, but traversing a chain is a linear scan; as discussed earlier, this can be addressed the way HashMap does it, converting a chain into a red-black tree once it exceeds 8 nodes and the array length exceeds 64. For pure traversal, chaining is slower than linear probing, because both the array and the chains must be walked. All in all, what this coursework showed me is that chaining is superior to linear probing. My chaining implementation adds a linear-probing fallback for chains longer than 8 nodes, although the reference Hashtable source imposes no such limit; and Hashtable itself, as an ancient implementation, is rarely used in practice. Chaining combined with treeification and a good hash function is the best way to resolve hash collisions in Java.
