From Galileo to Google: How Big Data Illuminates Human Culture

Given my longtime fascination with the so-called digital humanities and with data visualization, and my occasional dabbles in the intersection of the two, I’ve followed the work of data scholars Erez Aiden and Jean-Baptiste Michel with intense interest since its public beginnings. Now, they have collected and contextualized their findings in the compelling Uncharted: Big Data as a Lens on Human Culture (public library) — a stimulating record of their seven-year quest to quantify cultural change through the dual lens of history and digital data. By analyzing the contents of the more than 30 million books digitized by Google, and using Google’s Ngram Viewer tool, they explore how the usage frequency of specific words changes over time and what that might reveal about corresponding shifts in our cultural values and beliefs about economics, politics, health, science, the arts, and more.
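To make the method concrete: what the Ngram Viewer plots is not a word’s raw count but its share of all the words printed in a given year. The short Python sketch below illustrates that normalization step with toy records standing in for the actual Google Books Ngram data (whose file format is not reproduced here); the word, years, and counts are invented for illustration only.

```python
from collections import defaultdict

def relative_frequency(records, word):
    """Return {year: share}, where share = count of `word` / count of all words that year.

    This is the normalization behind an Ngram-Viewer-style curve: dividing by each
    year's total keeps the growth of publishing itself from masquerading as a trend.
    """
    word_counts = defaultdict(int)
    total_counts = defaultdict(int)
    for w, year, count in records:
        total_counts[year] += count
        if w == word:
            word_counts[year] += count
    return {year: word_counts[year] / total_counts[year] for year in sorted(total_counts)}

# Toy (word, year, count) records; the real corpus has billions of such rows.
records = [
    ("telegraph", 1850, 120), ("telephone", 1850, 3), ("the", 1850, 90_000),
    ("telegraph", 1950, 40),  ("telephone", 1950, 900), ("the", 1950, 120_000),
]
print(relative_frequency(records, "telephone"))
# In this toy data, "telephone" rises from roughly 3e-05 of all words in 1850 to about 7e-03 in 1950.
```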

Aiden and Michel, who met at Harvard’s Program for Evolutionary Dynamics and dubbed their field of research “culturomics,” contextualize the premise:

At its core, this big data revolution is about how humans create and preserve a historical record of their activities. Its consequences will transform how we look at ourselves. It will enable the creation of new scopes that make it possible for our society to more effectively probe its own nature. Big data is going to change the humanities, transform the social sciences, and renegotiate the relationship between the world of commerce and the ivory tower.

And big data is indeed big — humongous, even. Each of us, on average, has an annual data footprint of nearly one terabyte, and together we amount to a staggering five zettabytes per year. Since each byte consists of eight bits — short for “binary digits,” with each bit representing a binary yes-no question answered either by a 1 (“yes”) or a 0 (“no”) — humanity’s aggregate annual data footprint is equivalent to a gobsmacking forty sextillion (40,000,000,000,000,000,000,000) bits. Aiden and Michel humanize these numbers, so challenging for the human brain to grasp, with a pause-giving analog analogy:

If you wrote out the information contained in one megabyte by hand, the resulting line of 1s and 0s would be more than five times as tall as Mount Everest. If you wrote out one gigabyte by hand, it would circumnavigate the globe at the equator. If you wrote out one terabyte by hand, it would extend to Saturn and back twenty-five times. If you wrote out one petabyte by hand, you could make a round trip to the Voyager 1 probe, the most distant man-made object in the universe. If you wrote out one exabyte by hand, you would reach the star Alpha Centauri. If you wrote out all five zettabytes that humans produce each year by hand, you would reach the galactic core of the Milky Way. If instead of sending e-mails and streaming movies, you used your five zettabytes as an ancient shepherd might have—to count sheep—you could easily count a flock that filled the entire universe, leaving no empty space at all.
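The arithmetic behind these figures is easy to verify. Here is a minimal sketch, assuming the decimal (SI) definition of a zettabyte and a rough world-population figure of seven billion, neither of which is specified in the passage above.

```python
# Sanity-checking the numbers in the passage above, using SI (decimal) units.
BITS_PER_BYTE = 8
ZETTABYTE = 10**21  # bytes

annual_footprint_bits = 5 * ZETTABYTE * BITS_PER_BYTE
print(f"{annual_footprint_bits:.0e} bits per year")   # 4e+22
print(annual_footprint_bits == 40 * 10**21)           # True: forty sextillion bits

# Per-person share, assuming roughly seven billion people (an illustrative figure).
world_population = 7 * 10**9
per_person_bytes = 5 * ZETTABYTE / world_population
print(f"{per_person_bytes / 10**12:.2f} TB per person per year")  # ~0.71, i.e. "nearly one terabyte"
```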

But what makes our age unlike any preceding era is precisely that this information exists not as handwritten documents but as digital data, which opens up wholly new frontiers of making sense of the meaning embedded in these seemingly meaningless strings of 1s and 0s. Aiden and Michel put it beautifully:

Like an optic lens, which makes it possible to reliably transform and manipulate light, digital media make it possible to reliably transform and manipulate information. Given enough digital records and enough computing power, a new vantage point on human culture becomes possible, one that has the potential to make awe-inspiring contributions to how we understand the world and our place in it.

Aiden and Michel have focused their efforts on one particular, and particularly important, aspect of the big-data universe: books. More specifically, the more than 30 million books digitized by Google, or roughly a quarter of humanity’s existing books. They call this digital library “one of the most fascinating datasets in the history of history,” and it certainly is — not least due to its scale, which exceeds the collections of any university library, from Oxford’s 11 million volumes to Harvard’s 17 million, as well as the National Library of Russia with its 15 million and the National Library of China with its 26 million. At the outset of Aiden and Michel’s project, the only analog library still greater than the Google Books collection was the Library of Congress, which contains 33 million — but Google may well have surpassed that number by now.

Still, big data presents a number of problems. For one, it’s messy — something that doesn’t sit well with scientists’ preference for “carefully constructed questions using elegant experiments that produce consistently accurate results,” Aiden and Michel point out. By contrast, a big dataset tends to be “a miscellany of facts and measurements, collected for no scientific purpose, using an ad hoc procedure … riddled with errors, and marred by numerous, frustrating gaps.”

To further complicate things, big data doesn’t comply with the basic premise of the scientific method — rather than establishing causal relationships born of pre-existing hypotheses, it presents a seemingly bottomless pit of correlations awaiting discovery, often through a combination of doggedness and serendipity, an approach diametrically opposed to hypothesis-driven research. But that, arguably, is exactly what makes big data so alluring — as Stuart Firestein has argued in his fantastic case for why ignorance rather than certitude drives science, modern science could use what the scientific establishment so readily dismisses as “curiosity-driven research” — exploratory, hypothesis-free investigations of processes, relationships, and phenomena.

Michel and Aiden address these biases of science:

As we continue to stockpile unexplained and underexplained patterns, some have argued that correlation is threatening to unseat causation as the bedrock of scientific storytelling. Or even that the emergence of big data will lead to the end of theory. But that view is a little hard to swallow. Among the greatest triumphs of modern science are theories, like Einstein’s general relativity or Darwin’s evolution by natural selection, that explain the cause of a complex phenomenon in terms of a small set of first principles. If we stop striving for such theories, we risk losing sight of what science has always been about. What does it mean when we can make millions of discoveries, but can’t explain a single one? It doesn’t mean that we should give up on explaining things. It just means that we have our work cut out for us.

Such curiosity-driven inquiries speak to the heart of science — the eternal question of what science actually is — which Michel and Aiden capture elegantly:

What makes a problem fascinating? No one really agrees. It seemed to us that a fascinating question was something that a young child might ask, that no one knew how to answer, and for which a few person-years of scientific exploration — the kind of effort we could muster ourselves — might result in meaningful progress. Children are a great source of ideas for scientists, because the questions they ask, though superficially simple and easy to understand, are so often profound.

Indeed.

The promise of big data, it seems, is at once to return us to the roots of our childlike curiosity and to advance science to new frontiers of understanding the world. Much like the invention of the telescope transformed modern science and empowered thinkers like Galileo to spark a new understanding of the world, the rise of big data, Aiden and Michel argue, offers to “create a kind of scope that, instead of observing physical objects, would observe historical change” — and, in the process, to catapult us into unprecedented heights of knowledge:

The great promise of a new scope is that it can take us to uncharted worlds. But the great danger of a new scope is that, in our enthusiasm, we too quickly pass from what our eyes see to what our mind’s eye hopes to see. Even the most powerful data yields to the sovereignty of its interpreter. … Through our scopes, we see ourselves. Every new lens is also a new mirror.

They illustrate this with an example by way of Galileo himself, who began a series of observations of Mars in the fall of 1610 and soon noticed something remarkably curious: Mars seemed to be getting smaller and smaller as the months progressed, shrinking down to a third of its September size by December. This, of course, indicated that the planet was drifting farther and farther from Earth, which went on to become an essential piece of evidence demonstrating that the Ptolemaic idea of the geocentric universe was wrong: Earth wasn’t at the center of the cosmos, and the planets were moving along orbits of their own.

But Galileo, with his primitive telescope, couldn’t see any detail of the red planet’s surface — that didn’t happen until centuries later, when an astronomer by the name of Giovanni Schiaparelli aimed his far more powerful telescope at Mars. Suddenly, before his eyes were mammoth ridges that covered the planet’s surface like painted lines. These findings made their way to a man named Percival Lowell and so impressed him that in 1894 he built an entire observatory in Flagstaff, Arizona, equipped with a yet more powerful telescope, so that he could observe those mysterious lines. Lowell and his team went on to painstakingly record and map Mars’s mesh of nearly 700 criss-crossing “canals,” all the while wondering how they might have been created.

One of Lowell's drawings of the Martian canals.

Turning to the previous century’s theory that Mars’s scarce water reserves were contained in the planet’s frozen poles, Lowell assumed that the lines were a meticulous network of canals made by the inhabitants of a perishing planet in an effort to rehydrate it back to life. Based solely on his telescopic observations and the hypotheses of yore, Lowell concluded that Mars was populated by intelligent life — a “discovery” that at once excited and riled the scientific community, and even permeated popular culture. Even Henry Norris Russell, the unofficial “dean of American astronomers,” called Lowell’s ideas “perhaps the best of the existing theories, and certainly the most stimulating to the imagination.” And so they were — by 1898, H.G. Wells had penned The War of the Worlds.

While Lowell’s ideas dwindled in the decades that followed, they still held their appeal. It wasn’t until NASA’s landmark Mariner mission beamed back close-up photos of Mars — the significance of which Carl Sagan, Ray Bradbury, and Arthur C. Clarke famously debated — that the anticlimactic reality set in: There were no fanciful irrigation canals, and no little green men who built them.

The moral, as Aiden and Michel point out, is that “Martians didn’t come from Mars: They came from the mind of [Lowell].”

What big data offers, then, is hope for unbridling some of our cultural ideas and ideologies from the realm of myth and anchoring them instead to the spirit of science — which brings us to the crux of the issue:

Digital historical records are making it possible to quantify our human collective as never before.

[…]

Human history is much more than words can tell. History is also found in the maps we drew and the sculptures we crafted. It’s in the houses we built, the fields we kept, and the clothes we wore. It’s in the food we ate, the music we played, and the gods we believed in. It’s in the caves we painted and the fossils of the creatures that came before us. Inevitably, most of this material will be lost: Our creativity far outstrips our record keeping. But today, more of it can be preserved than ever before.

What makes Aiden and Michel’s efforts particularly noteworthy, however, is that they are as much a work of scrupulous scholarship as of passionate advocacy. They are doing for big data in the humanities what Neil deGrasse Tyson has been doing for space exploration, instigating both cultural interest and government support. They remind us that in today’s era of big science, where the Human Genome Project’s price tag was $3 billion and the Large Hadron Collider’s quest for the Higgs boson cost $9 billion, there is an enormous disconnect between the cultural value of the humanities and the actual price we put on better understanding human history — in contrast to such big-science enterprises, the entire annual budget of the National Endowment for the Humanities is a mere $150 million. Michel and Aiden remind us just what’s at stake:

The problem of digitizing the historical record represents an unprecedented opportunity for big-science-style work in the humanities. If we can justify multibillion-dollar projects in the sciences, we should also consider the potential impact of a multibillion-dollar project aimed at recording, preserving, and sharing the most important and fragile tranches of our history to make them widely available for ourselves and our children. By working together, teams of scientists, humanists, and engineers can create shared resources of extraordinary power. These efforts could easily seed the Googles and Facebooks of tomorrow. After all, both these companies started as efforts to digitize aspects of our society. Big humanities is waiting to happen.

And yet the idea is nothing new. Count on the great Isaac Asimov to have presaged it, much like he did online education, the fate of space exploration, and even Carl Sagan’s rise to stardom. In his legendary Foundation trilogy, Asimov conceives his hero, Hari Seldon, as a masterful mathematician who can predict the future through complex mathematical equations rooted in aggregate measurements about the state of society at any given point in time. Like Seldon, who can’t anticipate what any individual person will do but can foreshadow larger cultural outcomes, big data, Aiden and Michel argue, is the real-life equivalent of Asimov’s idea, which he termed “psychohistory” — an invaluable tool for big-picture insight into our collective future.

Perhaps more than anything, however, big data holds the promise of righting the balance of quality over quantity in our culture of information overabundance, helping us to extract meaning from (digital) matter. In a society that tweets more words every hour than all of the surviving ancient Greek texts combined, we certainly could use that.

Uncharted is an excellent and timely read in its entirety, both as a curious window into the secret life of language and as an important piece of advocacy for the value of the digital humanities in the age of data. Sample the project with Aiden and Michel’s entertaining and illuminating TED talk:
