Big Data

Source: https://words.sdsc.edu/words-data-science/big-data
The term ‘big data’ seems to be popping up everywhere these days, and there seem to be as many uses of the term as there are contexts in which you find it: ‘big data’ is often used to refer to any dataset that is difficult to manage using traditional database systems; it is also used as a catch-all term for any collection of data that is too large to process on a single server; yet others use the term to simply mean “a lot of data”; sometimes it turns out the data doesn’t even have to be large. So what exactly is big data?

A precise specification of ‘big’ is elusive. What is considered big for one organization may be small for another. What is large-scale today will likely seem small-scale in the near future; petabyte is the new terabyte. Thus, size alone cannot specify big data. The complexity of the data is an important factor that must also be considered.

Most now agree with the characterization of big data using the 3 V’s coined by Doug Laney of Gartner:

  • Volume: This refers to the vast amounts of data that are generated every second, minute, hour, and day in our digitized world.
  • Velocity: This refers to the speed at which data is generated and the pace at which it moves from one point to the next.
  • Variety: This refers to the ever-increasing range of forms that data can come in, e.g., text, images, voice, and geospatial data.

A fourth V is now also sometimes added:

  • Veracity: This refers to the quality of the data, which can vary greatly.

The above V’s are the dimensions that characterize big data, and they also embody its challenges: we have huge amounts of data, in different formats and of varying quality, that must be processed quickly.
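To make the veracity dimension a little more concrete, here is a minimal sketch of a data-quality check that flags records with missing or implausible values before they enter an analysis pipeline. The record fields and plausibility thresholds are invented for the example, not taken from the article.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    timestamp: float  # seconds since epoch
    value: float      # e.g., temperature in Celsius

def quality_issues(r: Reading) -> list[str]:
    """Return a list of veracity problems found in one record."""
    issues = []
    if not r.sensor_id:
        issues.append("missing sensor_id")
    if r.timestamp <= 0:
        issues.append("invalid timestamp")
    if not (-60.0 <= r.value <= 60.0):  # plausible range is an assumption
        issues.append("value out of plausible range")
    return issues

readings = [
    Reading("s1", 1_700_000_000.0, 21.5),
    Reading("", 1_700_000_060.0, 19.8),     # missing ID
    Reading("s3", 1_700_000_120.0, 999.0),  # implausible value
]

for r in readings:
    problems = quality_issues(r)
    print(r.sensor_id or "<unknown>", "->", "; ".join(problems) or "ok")
```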

It is important to note that the goal of processing big data is to gain insight to support decision-making. It is not sufficient to just be able to capture and store the data. The point of collecting and processing volumes of complex data is to understand trends, uncover hidden patterns, detect anomalies, etc. so that you have a better understanding of the problem being analyzed and can make more informed, data-driven decisions. In fact, many consider value as the fifth V of big data:

  • Value: Processing big data must bring about value from the insights gained.

To address the challenges of big data, innovative technologies are needed. Parallel, distributed computing paradigms, scalable machine learning algorithms, and real-time querying are key to the analysis of big data. Distributed file systems, computing clusters, cloud computing, and data stores that support data variety and agility are also necessary to provide the infrastructure for processing big data. Workflows provide an intuitive, reusable, scalable, and reproducible way to process big data, gain verifiable value from it, and apply the same methods to different datasets.
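As a toy illustration of the parallel, divide-and-conquer paradigm these technologies share, the sketch below counts words over chunks of data in parallel worker processes and then merges the partial results: the same map-then-reduce shape that distributed frameworks apply across a whole cluster. The dataset and chunking scheme are invented for the example.

```python
from collections import Counter
from multiprocessing import Pool

def count_words(chunk: list[str]) -> Counter:
    """Map step: count words in one chunk of documents."""
    counts = Counter()
    for doc in chunk:
        counts.update(doc.lower().split())
    return counts

def merge(partials: list[Counter]) -> Counter:
    """Reduce step: combine the partial counts into a single total."""
    total = Counter()
    for c in partials:
        total.update(c)
    return total

if __name__ == "__main__":
    docs = ["big data is big", "data moves fast", "variety of data"] * 1000
    chunks = [docs[i::4] for i in range(4)]  # split the work four ways
    with Pool(4) as pool:
        partials = pool.map(count_words, chunks)  # runs in parallel
    print(merge(partials).most_common(3))
```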

With all the data generated from social media, smart sensors, satellites, surveillance cameras, the Internet, and countless other devices, big data is all around us. The endeavor to make sense out of that data brings about exciting opportunities indeed!

Examine the problem of maintaining the quality of big data and discover novel solutions. You will learn the four V’s of big data, including veracity, and study the problem from various angles. The solutions discussed are drawn from diverse areas of engineering and math, including machine learning, statistics, formal methods, and blockchain technology. Veracity of Big Data serves as an introduction to machine learning algorithms and to techniques such as the Kalman filter, SPRT, CUSUM, fuzzy logic, and blockchain, showing how they can be used to solve problems in the veracity domain. Using examples, the math behind the techniques is explained in easy-to-understand language.

Determining the truth of big data in real-world applications involves using various tools to analyze the available information, and the book delves into some of the techniques that can be used. Microblogging websites such as Twitter have played a major role in public life, including during presidential elections. The book uses examples of microblogs posted on a particular topic to demonstrate how veracity can be examined and established. Some of the techniques are described in the context of detecting veiled attacks on microblogging websites intended to influence public opinion.

What you’ll learn:

  • Understand the problem of data veracity and its ramifications.
  • Develop the mathematical foundation needed to help minimize the impact of the problem, explained in easy-to-understand language with examples.
  • Use diverse tools and techniques, such as machine learning algorithms, blockchain, and the Kalman filter, to address veracity issues.

Who this book is for: software developers and practitioners, practicing engineers, curious managers, graduate students, and research scholars.
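As a taste of one technique named above, here is a minimal one-sided CUSUM (cumulative sum) change-detection sketch; the data stream, target mean, drift, and threshold values are invented for illustration. CUSUM accumulates deviations from an expected mean and raises a flag when the cumulative drift crosses a threshold, which is one way to spot a suspicious shift in, say, the rate of posts on a topic.

```python
def cusum(stream, target_mean, drift=0.5, threshold=5.0):
    """One-sided CUSUM: return the index where an upward mean shift is detected.

    Accumulates (x - target_mean - drift); a sustained upward shift makes
    the running sum grow until it crosses `threshold`.
    """
    s = 0.0
    for i, x in enumerate(stream):
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return i  # change detected at this index
    return None  # no change detected

# 20 in-control samples around 10, then a shift up to about 13.
data = [10.2, 9.8, 10.1, 9.9, 10.0] * 4 + [13.1, 12.8, 13.3, 12.9, 13.2]
print(cusum(data, target_mean=10.0))  # flags an index shortly after the shift
```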