Original: Implementing a Timer Class Based on the C++ Standard Library
A typical Timer interface supports two modes: one-shot (alarm once) and periodic (alarm period). Python's threading.Timer clearly supports only the one-shot mode, although the periodic mode can be emulated by re-creating the timer inside the one-shot callback. That said, I recently consulted… What follows is a condition-variable-based C++ Timer class. If you can follow the C version of the code, the C++ version is self-explanatory, since it is essentially a direct translation.
2024-06-28 15:07:16 1636
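The condition-variable idea behind the post can also be sketched in Python (an illustrative, hypothetical class, not the post's C++ code): waiting on the condition with a timeout yields both modes, and notifying it lets stop() wake the worker without waiting out the interval.

```python
import threading

class PeriodicTimer:
    """One-shot or periodic timer built on a condition variable
    (illustrative sketch; the names here are made up)."""

    def __init__(self, interval, callback, periodic=False):
        self.interval = interval      # seconds between firings
        self.callback = callback
        self.periodic = periodic
        self._cond = threading.Condition()
        self._stopped = False
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        with self._cond:
            self._stopped = True
            self._cond.notify_all()   # wake the worker immediately
        self._thread.join()

    def _run(self):
        with self._cond:
            while not self._stopped:
                # wait() returns False on timeout, i.e. when it is time to fire
                if not self._cond.wait(self.interval):
                    self.callback()
                    if not self.periodic:
                        break
```

The default periodic=False gives the one-shot mode; stop() works for both modes because the timed wait doubles as an interruptible sleep.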
Original: How to Tell Whether a JavaScript Object Contains a Circular Reference
JSON.stringify is a method we use frequently on the front end to serialize an object. A plain object serializes without trouble, but when we call JSON.stringify on an object whose owner property points back to the object itself, it throws an error: "Converting circular structure to JSON". The self-reference means serialization would recurse forever.
2024-06-28 14:57:21 377
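The same check can be written explicitly. Here is a sketch in Python (a hypothetical helper; Python's json.dumps raises the analogous "Circular reference detected" error): track the ids of containers on the current descent path and report a cycle when one repeats.

```python
def has_cycle(obj, _seen=None):
    """Return True if a nested dict/list structure references itself
    somewhere along a descent path (illustrative sketch)."""
    if _seen is None:
        _seen = set()
    if isinstance(obj, (dict, list)):
        if id(obj) in _seen:
            return True               # container already on this path: cycle
        _seen.add(id(obj))
        values = obj.values() if isinstance(obj, dict) else obj
        result = any(has_cycle(v, _seen) for v in values)
        _seen.discard(id(obj))        # unwind: shared (DAG) refs are not cycles
        return result
    return False

person = {"name": "Alice"}
person["owner"] = person   # owner points back to the object itself
# json.dumps(person) would raise ValueError: Circular reference detected
```

Discarding the id on unwind is what distinguishes a true cycle from the harmless case of the same sub-object appearing under two different keys.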
Original: Testimonials
Get started: Start watching lesson 1 now! The Economist: "This month fast.ai, an education non-profit based in San Francisco, kicked off the third year of its course in deep learning. Since its inception it has attracted more than 100,000 students, scattered ar
2024-06-06 08:47:26 799
Original: Kaggle
Kaggle is the world’s largest data science community. One of Kaggle’s features is “Notebooks”, which is “a cloud computational environment that enables reproducible and collaborative analysis”. In particular, Kaggle provides access to GPUs for free. Every
2024-06-03 08:38:51 914
Original: Forums
If you need help, there’s a wonderful online community ready to help you at forums.fast.ai. Before asking a question on the forums, search carefully to see if your question has been answered before. (The forum system won’t let you post until you’ve spent a
2024-06-03 08:38:10 430
Original: 23. Super-resolution
In this lesson, we work with Tiny Imagenet to create a super-resolution U-Net model, discussing dataset creation, preprocessing, and data augmentation. The goal of super-resolution is to scale up a low-resolution image to a higher resolution. We train the
2024-05-29 15:50:12 1586
Original: 22. Karras et al (2022)
Jeremy begins this lesson with a discussion of improvements to the DDPM/DDIM implementation. He explores the removal of the concept of an integral number of steps, making the process more continuous. He then delves into predicting the amount of noise in an
2024-05-28 08:38:47 706
Original: 21. DDIM
In this lesson, Jeremy, Johno, and Tanishq discuss their experiments with the Fashion-MNIST dataset and the CIFAR-10 dataset, a popular dataset for image classification and generative modeling. They introduce Weights and Biases (W&B), an experiment trackin
2024-05-28 08:37:58 306
Original: 20. Mixed Precision
In this lesson, we dive into mixed precision training and experiment with various techniques. We introduce the MixedPrecision callback for PyTorch and explore the Accelerate library from HuggingFace for speeding up training loops. We also learn a sneaky tr
2024-05-27 15:40:14 493
Original: 19. DDPM and Dropout
In this lesson, Jeremy introduces Dropout, a technique for improving model performance, and with special guests Tanishq and Johno he discusses Denoising Diffusion Probabilistic Models (DDPM), the underlying foundational approach for diffusion models. The l
2024-05-27 15:39:29 304
Original: 18. Accelerated SGD & ResNets
In this lesson, we dive into various stochastic gradient descent (SGD) accelerated approaches, such as momentum, RMSProp, and Adam. We start by experimenting with these techniques in Microsoft Excel, creating a simple linear regression problem and applying
2024-05-24 15:21:55 351
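The momentum update the lesson builds step by step in Excel can be sketched in a few lines of Python (illustrative names; the lr and beta values are just examples): keep an exponentially weighted running gradient v and step the weight against it.

```python
def sgd_momentum_step(w, grad, v, lr=0.1, beta=0.9):
    """One momentum-SGD step: v accumulates past gradients,
    and w moves against the accumulated direction (sketch)."""
    v = beta * v + grad
    return w - lr * v, v
```

Looping this on a toy quadratic f(w) = w**2 (so grad = 2*w) shows the characteristic damped-oscillation convergence of heavy-ball momentum.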
Original: 17. Initialization/Normalization
In this lesson, we discuss the importance of weight initialization in neural networks and explore various techniques to improve training. We start by introducing changes to the miniai library and demonstrate the use of HooksCallback and ActivationStats for
2024-05-24 15:21:03 307
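One of the initialization schemes such lessons lean on, Kaiming/He init, can be sketched with NumPy (a minimal illustration, not the miniai code): scale Gaussian weights by sqrt(2 / fan_in) so activation variance is roughly preserved through ReLU layers.

```python
import numpy as np

def kaiming_init(fan_in, fan_out, seed=0):
    """He initialization for ReLU networks: std = sqrt(2 / fan_in)
    (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((fan_in, fan_out)) * np.sqrt(2.0 / fan_in)
```

With fan_in = 1000 the sampled weights have standard deviation close to sqrt(0.002) ≈ 0.045, which is what keeps activations from exploding or vanishing layer by layer.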
Original: 16. The Learner Framework
In Lesson 16, we dive into building a flexible training framework called the learner. We start with a basic callbacks Learner, which is an intermediate step towards the flexible learner. We introduce callbacks, which are functions or classes called at spec
2024-05-17 13:21:47 816
Original: 15. Autoencoders
We start with a dive into convolutional autoencoders and explore the concept of convolutions. Convolutions help neural networks understand the structure of a problem, making it easier to solve. We learn how to apply a convolution to an image using a kernel
2024-05-17 13:20:33 427
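Applying a kernel to an image, as the summary describes, can be sketched in plain Python (a naive "valid" cross-correlation with no padding or stride; real code would use a library kernel):

```python
def conv2d(image, kernel):
    """Slide kernel over image and sum elementwise products at each
    position (valid cross-correlation, illustrative sketch)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1          # output height shrinks by kh - 1
    ow = len(image[0]) - kw + 1       # output width shrinks by kw - 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out
```

The shrinking output size is why convolutional layers typically add padding when they want to preserve spatial dimensions.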
Original: 13. Backpropagation & MLP
[Code] 13. Backpropagation & MLP
2024-05-16 08:59:30 373
Original: 12. Mean Shift Clustering
In this lesson, we start by discussing the CLIP Interrogator, a Hugging Face Spaces Gradio app that generates text prompts for creating CLIP embeddings. We then dive back into matrix multiplication, using Einstein summation notation and torch.einsum to sim
2024-05-15 09:08:53 473
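The Einstein-summation spelling of matrix multiplication mentioned above looks the same in NumPy as in torch.einsum (a small illustrative example): "ik,kj->ij" multiplies along the shared index k and sums it out.

```python
import numpy as np

a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)
# "ik,kj->ij": for each (i, j), sum a[i, k] * b[k, j] over k,
# which is exactly ordinary matrix multiplication
c = np.einsum("ik,kj->ij", a, b)
```

The notation pays off once the operation is not a plain matmul, e.g. batched products ("bik,bkj->bij") fall out of the same syntax.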
Original: 11. Matrix Multiplication
In this lesson, we discuss various techniques and experiments shared by students on the forum, such as interpolating between prompts for visually appealing transitions and improving the update process in text-to-image generation, and a novel approach to de
2024-05-15 09:08:03 480 1
Original: Summaries, Lesson 4
New and Exciting Content. Why Hugging Face transformer? Will we in this lecture fine-tune a pretrained NLP model with HF rather than the fastai library? Why use transformer rather than the fastai library? Is Jeremy in the process of integrating transformer int
2024-05-10 08:46:04 903
Original: Summaries, Lesson 3
Introduction and survey. "Lesson 0": How to fast.ai. Where is the Lesson 0 video? What does it have to do with the book Meta Learning and the fastai course? How to do a fastai lesson? Watch with notes; run the notebook and experiment; reproduce the notes from t
2024-05-10 08:44:13 699
TCP Protocol Sockets (Server & Client): Network Protocol Configuration Code
2023-07-28