20240917 learning summary

So... today is the last day of the Mid-Autumn Festival, just the day before the term begins, and I went to the library to fix my mental state so as to adapt to the tense environment of skd. I am writing this summary because I want to practice my English writing skills.

Well, what did I learn?

First, algorithms. Things like the BST, under which sit AVL and Splay trees, both designed to keep the height of the tree low. But unlike AVL, which stresses the balance of the tree, Splay emphasizes the convenience of searching (by moving the searched element, or its parent when the key is absent, to the root position). Both involve transformations of the tree, yet both preserve the sorted (ascending) sequence of the inorder traversal, and their time complexity is O(log n) (amortized, in Splay's case). After an element in a Splay tree is found, its parent and grandparent change positions -- first the grandparent rotation, then the parent. In this clever way the tree height decreases step by step. All these steps happen inside the function searchin(), and the insert and delete functions both call it.
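Here is a minimal sketch of that splay step, assuming the usual bottom-up splay with parent pointers; the names Node, rotate(), splay() and search() are my own for the routines described above (the notes call the lookup searchin()):

```
#include <iostream>

// A node of the splay tree, with a parent pointer for bottom-up splaying.
struct Node {
    int key;
    Node *left = nullptr, *right = nullptr, *parent = nullptr;
    Node(int k) : key(k) {}
};

// Rotate x above its parent; the inorder (sorted) sequence is preserved.
void rotate(Node*& root, Node* x) {
    Node* p = x->parent;
    Node* g = p->parent;
    if (x == p->left) {                    // right rotation
        p->left = x->right;
        if (x->right) x->right->parent = p;
        x->right = p;
    } else {                               // left rotation
        p->right = x->left;
        if (x->left) x->left->parent = p;
        x->left = p;
    }
    p->parent = x;
    x->parent = g;
    if (!g) root = x;
    else if (g->left == p) g->left = x;
    else g->right = x;
}

// Splay x to the root. In the zig-zig case the grandparent edge is
// rotated first, then the parent edge -- exactly the "first grandparent,
// next parent" order noted above, which is what shortens the path.
void splay(Node*& root, Node* x) {
    while (x->parent) {
        Node* p = x->parent;
        Node* g = p->parent;
        if (g) {
            if ((g->left == p) == (p->left == x)) rotate(root, p); // zig-zig
            else rotate(root, x);                                  // zig-zag
        }
        rotate(root, x);
    }
}

// Find key and splay the last node touched to the root; insert and
// delete can then be built on top of this single routine.
Node* search(Node*& root, int key) {
    Node* cur = root, *last = nullptr;
    while (cur) {
        last = cur;
        if (key == cur->key) break;
        cur = (key < cur->key) ? cur->left : cur->right;
    }
    if (last) splay(root, last);   // even a failed search reshapes the tree
    return (root && root->key == key) ? root : nullptr;
}

int main() {
    // A tiny hand-built BST: 2 <- 4 -> 6 (toy data for illustration)
    Node* root = new Node(4);
    root->left = new Node(2);  root->left->parent = root;
    root->right = new Node(6); root->right->parent = root;
    search(root, 2);
    std::cout << root->key << "\n";  // prints 2: the found key is now the root
}
```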

Next, it's the algorithm study with Carl. The first exercise is to reverse the words in a sentence and delete the redundant blank spaces. The idea is: first delete the redundant spaces [at the beginning (a while loop), in the middle (a for loop), and at the end (resize())], then reverse all the chars, and finally reverse each individual word. Individual words are located by setting a flag (entry) plus the start and end of each word.
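This is roughly LeetCode 151, "Reverse Words in a String". Below is a sketch in C++; instead of the separate while/for/resize() passes, it uses a condensed two-pointer variant that collapses the spaces in one pass and still finishes with resize(), then does the reverse-all / reverse-each-word trick:

```
#include <algorithm>
#include <iostream>
#include <string>

// Drop leading/trailing spaces and collapse inner runs to a single
// space, with a slow/fast two-pointer pass and a final resize().
void removeExtraSpaces(std::string& s) {
    int slow = 0;
    for (int fast = 0; fast < (int)s.size(); ++fast) {
        if (s[fast] != ' ') {                 // start of a word found
            if (slow != 0) s[slow++] = ' ';   // one space before every word but the first
            while (fast < (int)s.size() && s[fast] != ' ')
                s[slow++] = s[fast++];        // copy the whole word forward
        }
    }
    s.resize(slow);                           // cut off the leftover tail
}

// Clean the spaces, reverse the whole string, then reverse each word
// so its letters read forward again.
std::string reverseWords(std::string s) {
    removeExtraSpaces(s);
    std::reverse(s.begin(), s.end());
    int start = 0;                            // start index of the current word
    for (int i = 0; i <= (int)s.size(); ++i) {
        if (i == (int)s.size() || s[i] == ' ') {   // a word just ended
            std::reverse(s.begin() + start, s.begin() + i);
            start = i + 1;
        }
    }
    return s;
}

int main() {
    std::cout << reverseWords("  the sky  is blue  ") << "\n"; // "blue is sky the"
}
```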

I also learnt something about the KMP algorithm. It searches for one string (the pattern) inside another; when a mismatched char is encountered, instead of restarting from the beginning it falls back to the nearest position that can still match and continues searching (using the prefix table). The prefix table records, for each prefix of the pattern, the length of its longest common prefix and suffix, and the next array uses those lengths for repositioning. The code of getnext() is a process of comparing the suffix against the prefix: i moves on, and j steps back while s[i] != s[j]; otherwise j++; and next[i] = j.
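A minimal sketch of KMP, under the common convention that next[i] directly stores the longest common prefix/suffix length of the pattern's first i+1 chars (no shifted next array); getNext() and kmpSearch() are my names for the routines described above:

```
#include <iostream>
#include <string>
#include <vector>

// Prefix table: next[i] = length of the longest proper prefix of
// s[0..i] that is also a suffix of s[0..i].
std::vector<int> getNext(const std::string& s) {
    std::vector<int> next(s.size());
    int j = 0;                        // current common prefix/suffix length
    next[0] = 0;
    for (int i = 1; i < (int)s.size(); ++i) {
        while (j > 0 && s[i] != s[j]) // mismatch: j steps back...
            j = next[j - 1];
        if (s[i] == s[j]) ++j;        // ...match: extend, j++
        next[i] = j;
    }
    return next;
}

// First occurrence of needle in haystack, or -1. On a mismatch, j
// falls back via the prefix table instead of restarting from zero.
int kmpSearch(const std::string& haystack, const std::string& needle) {
    if (needle.empty()) return 0;
    std::vector<int> next = getNext(needle);
    int j = 0;                        // chars of needle matched so far
    for (int i = 0; i < (int)haystack.size(); ++i) {
        while (j > 0 && haystack[i] != needle[j]) j = next[j - 1];
        if (haystack[i] == needle[j]) ++j;
        if (j == (int)needle.size()) return i - j + 1;  // full match
    }
    return -1;
}

int main() {
    std::cout << kmpSearch("aabaabaaf", "aabaaf") << "\n"; // prints 3
}
```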

Keras is an open-source neural network library written in Python. It is designed to enable fast experimentation with deep neural networks and easy deployment to production. Keras provides a high-level API for building and training deep learning models.

The Keras summary method is used to provide a summary of the model architecture, including the number of parameters and the output shape of each layer. This summary can be useful for debugging and optimizing the model, as well as understanding its structure and behavior. The summary method takes an optional argument called "line_length", which specifies the maximum length of each line in the summary output. If the line length is too short, the summary may be split across multiple lines, making it difficult to read. If the line length is too long, the summary may become too wide to fit on the screen.

To use the Keras summary method, first create a Keras model by defining its layers and compiling it with an optimizer and loss function. Then call the summary method on the model object:

```
from keras.models import Sequential
from keras.layers import Dense

# Define a simple Keras model
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=100))
model.add(Dense(1, activation='sigmoid'))

# Compile the model with an optimizer and loss function
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

# Print a summary of the model architecture
model.summary()
```

The output of the summary method will look something like this:

```
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 64)                6464
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 65
=================================================================
Total params: 6,529
Trainable params: 6,529
Non-trainable params: 0
_________________________________________________________________
```

This summary shows that the model has two layers, one with 64 neurons and one with 1 neuron, and a total of 6,529 parameters. It also shows the output shape of each layer, which is (None, 64) for the first layer and (None, 1) for the second layer. Finally, it shows the total number of trainable parameters and non-trainable parameters in the model.