Biology at Scale: 5 Reasons Why It Finally Matters

Technology and Scale’s love affair is well-known, but their adoption of Biology has had its rough patches. Why is now different?

From the early days of high throughput compound screening, to the more recent promises of genomics in precision medicine, the biotech industry has tended to overestimate the impact of “scale” on discovery.

This is scale in experimentation, process, synthesis, and computation. Scale that has emerged in the forms of crafty chemistry combinatorics to create millions of compounds, gargantuan screening facilities capable of speedy readouts, massive omics datasets, and much more.

And those are the legitimate efforts. I’m not even going to get into the unproven claims of scale that can pop up in biotech, à la a certain company ending in “heranos”.

While not entirely in vain, attempts at transforming biological discovery with scale have often fallen short, or at the very least taken far longer than expected to buck the hype cycle and make it out of the trough of disillusionment. The massive data sets traditionally produced (or pieced together) as a result of biology at scale have been in many ways flawed, resulting in limited translational value. Further still, there has been a gap between the massive data we currently have available, and our ability to translate that data into something we can run tangible experiments against.

[Image: Gaping void]

In other words, More is not Better, biology is complex, and the biotech industry has the costly clinical failures to show for it. Adding insult to injury, the term “Eroom’s Law” has been coined, contrasting the negative trends in drug discovery success rate with the ever-upwards climb of compute power in accordance with Moore’s Law.

But there’s a light. Even in the past decade, something of a reversal in the success rate of drug development has occurred.

[Figure: Ringel et al.]

There are many reasons for this potential about-face (friendly FDA, rare genetic disease focus, etc.), but the industry’s re-imagining of what scale means is one reason that becomes ever-more relevant. We are now at a juncture in how we generate and interpret biology at scale.

We are moving out of an era of brute force in biological experimentation, and into a new era of relevant, intelligent, and validated scale in biology.

This application of scale is poised to actually disrupt biological discovery productivity. In this new era, terms like “high throughput”, “massive”, and “automated” will prove their real merit, instead of triggering immediate skepticism in the eyes of your friendly neighborhood pharma executive.

We will see functional genomics platforms identifying and validating biological targets at unprecedented speeds; the advent of relevant computational approaches quickly narrowing solution spaces; the rapid, intelligent optimization of technologies that up-level our control over biology. And much more.

A healthy amount of caution is necessary though — scale will never be the sole proprietor of the biological discovery engine. It will be the fuel that powers it.

But if we proceed vigilantly and point our ship in the right direction, an era of relevant, intelligent, validated scale is poised to revolutionize biological experimentation, process, and computation to transform our knowledge of sub-cellular omics, cells, systems, and bodies.

What is catalyzing this era of relevant, intelligent, and validated scale?

  1. Biologically relevant datasets are revealing subtle insights

  2. Multiparametric and multiplex experimentation is transforming the validation and throughput of biology

  3. “Intelligent automation” is enabling reproducible and optimized biology

  4. “Full-stack” biotechs are modularizing data generation

  5. Unbiased data and computational advances are converging to create predictive models of biology

#1: Biologically relevant datasets are revealing subtle insights

A certain founder of Microsoft was once quoted as saying —

Automation applied to an effective operation will magnify the efficiency. Automation applied to an inefficient operation will magnify the inefficiency.

A parallel can be applied to biology —

Applying scale to generate irrelevant biological data will distort your findings. Applying scale to create relevant biological data will enable subtle insight.

Biologically relevant data is that which provides the most representative view of how our bodies function. This data is free of artifacts, is limited in noise, and is often derived from models with genetic, regulatory, metabolic, spatial, and temporal characteristics representative of our own internal machinery. Our ability to obtain this biologically relevant data has been greatly enhanced in recent years due to advancements leveraging novel integrations of chemistry, microfluidics, and microscopy. Such technologies have already enabled massive progress in the field of genomics, and we are now in a golden age of their application to all areas of biotechnology. The result has been the generation of biologically relevant data at scale, including —

  • Physiologically relevant data, produced from representative models such as pluripotent stem cells, primary cells, co-cultures, and organoids.

  • High-resolution data, using techniques such as single-cell analysis, spatial omics, and high content imaging.

  • Genetically validated data, particularly enabled by the advent of genome perturbation tools such as CRISPR. Coming from a time when we tested many hypotheses against few genetic backgrounds, we are now flipping the script and realizing the biological implications of genetic diversity.

  • Temporal data, captured over time as opposed to an inconclusive snapshot. This could include more frequent timepoints over the course of a gene expression experiment, or tracking patient biomarkers against disease progression over the course of 10 years.

Among many others. Through such data, we will be able to tease out signal from noise, and actually derive meaning from biology at scale instead of compounding the confusion.

#2: Multiparametric and multiplex experimentation is transforming the throughput of biology

I know, this article started by calling out the shortcomings of some of today’s massive, multi-dimensional datasets. Sure, these datasets have aggregated various measures of genome, epigenome, phenome, metabolome, etc…but they have done so in a highly piecemeal way. When even the way a pipette is held can impact an experiment, the batch effects, lack of standardization, and inherent variability in broadly aggregated datasets can make their findings indicative, rather than conclusive.

This is not to say existing troves of data are useless — they are invaluable, but only to a point. Systematic, incremental data generation will be critical for realizing the full value of aggregated datasets. Such incremental data generation 1) validates insights from aggregate datasets, and 2) fills the gaps.

Novel advances in multiparametric and/or multiplex experimentation platforms are addressing this need for incremental data generation, through the collection of massive, richly described, standardized datasets in single-experiments.

  1. Multiparametric experimentation involves the collection of multiple, potentially orthogonal readouts at once. We are now frequently seeing such variables being measured simultaneously — including tandem measures of cell morphology, cellular motility, gene expression, spatio-temporal variability, and more.

  2. Multiplex experimentation involves the simultaneous processing of many biological events or components of a single type (e.g. sequencing of many pieces of DNA in parallel, identification of many cell surface markers at once, detection of many different metabolites in unison). In particular, we are seeing an emergence of “library on library” screening approaches, in which target libraries (usually proteins) are screened against a modifying entity (antibody, small molecule, T cell receptor, etc.)

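As a toy sketch of the bookkeeping behind such a library-on-library screen (every name, score, and threshold below is simulated for illustration, not drawn from any real platform), the experiment can be viewed as filling a targets × modifiers score matrix and flagging outlier pairs as hits:

```python
import random

# Hypothetical library-on-library screen: score every (target, modifier) pair.
# Real scores would come from sequencing counts or binding assays; here we
# simulate them just to show the structure of the data, not any real biology.
random.seed(0)

targets = [f"GPCR_{i}" for i in range(200)]    # e.g. a protein target library
modifiers = [f"mol_{j}" for j in range(100)]   # e.g. a small-molecule library

# One readout per (target, modifier) pair -> a 200 x 100 score matrix.
scores = {(t, m): random.gauss(0.0, 1.0) for t in targets for m in modifiers}

# Flag "hits" as pairs scoring far above the library-wide mean -- a simple
# z-score cutoff, standing in for a proper statistical hit-calling model.
values = list(scores.values())
mean = sum(values) / len(values)
std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
hits = [(t, m, s) for (t, m), s in scores.items() if s > mean + 3 * std]

print(f"{len(hits)} hits out of {len(scores)} pairs screened")
```

The point of the sketch is the shape of the experiment: a single run yields the full pairwise matrix, so every hit comes with the context of every non-hit measured under identical conditions.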
These “multi-experimentation” approaches are well-suited to increase the standardization, throughput, and validation of generated data. By maximizing efficiency and minimizing confounding variability, such datasets are ideal for validating the implications — and filling in the gaps — of our existing knowledge. Multi-experimentation even goes a step further in the value it adds; when multiple variables or multiplex measures are collected in parallel, the dataset returns future value beyond the immediate study.

Notably, multiparametric/multiplex scale is valuable in the context of our modern understanding of disease. We now know that most complex diseases are a result of many genetic, epigenetic, and environmental variables.

Multi-experimentation can provide the scale and context to both identify and drug multi-factorial causes of disease.

Multiparametric and multiplex platforms are reinventing the value of scale. Don’t take my word for it though — take a look at just a few of the groups revolutionizing the generation of multi-parametric and multiplexed data.

  • Multiparametric: Recursion Pharma is powering the collection of dozens of cellular phenotypes simultaneously by leveraging high content imaging and machine learning-driven informatic pipelines.

  • Multiparametric: Freenome integrates assays for cell-free DNA, methylation, and proteins with machine learning techniques to understand additive signatures for early cancer detection.

  • Multiplexed: Octant Bio is evaluating the impact of single molecules across thousands of GPCR targets in unison, in an effort to find the molecules that may best treat multi-factorial conditions such as neurodegeneration, and obesity.

  • Multiplexed: Tango Therapeutics is performing high-powered pooled CRISPR screens to evaluate the effects of genotype perturbation on phenotypic effect, in thousands of genes simultaneously.

#3: “Intelligent automation” is enabling reproducible and optimized biology

While recently the spotlight has fallen on various robotic laboratory companies touting automated platforms, automation is not a novel tool in biotech. Traditionally though, automation has been applied towards relatively simple experimentation — think of DNA sequencing and synthesis, or small molecule compound screens in immortalized cell lines.

Today, there is a movement towards intelligent automation of experiments, which can be attributed to progress in two areas —

  1. Integration of sensors, readouts, and longitudinal data collection as part of automated workflows.

  2. Algorithmic optimization of automated workflows based on collected data.

Firstly, such progress allows relevant biology data to be collected in a standardized manner, at scale. Given that over $28B/year is spent on non-reproducible biomedical research in the US alone, this standardization is critical. Secondly, by continuously optimizing experimental parameters, the best possible experimental biology protocols for generating data with minimal noise can be found.

Most interestingly though, intelligent automation enables the rapid iteration of biological tools, technologies, and products.

Intelligent scale of experimentation can identify the key factors to modify in a given biotechnology, and then optimize those factors themselves.

Consider an example here — the engineering of an allogeneic, gene edited cell therapy. While the end goal is to be able to engineer a cell capable of destroying cancer cells, the first step would be to identify the “tool” with which you would genetically edit modifications into the cell. Such tools could include CRISPR/Cas9, TALENs, ZFNs, etc. To optimize a chosen tool, intelligent, automated experimentation would enable you to both identify the most important variables to optimize (e.g. ideal transfection conditions, gene editing components, editing enhancing reagents, etc.), and then to optimize the variables themselves. The resultant optimized technology toolkit could be used to perform complex edits such as site-specific gene knock-ins, multiplex genetic edits, and more, enabling the optimal designed therapy.

[Figure: Satpathy et al.]
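The identify-then-optimize loop described above can be sketched as a two-step search: a coarse factorial screen across conditions, followed by refinement of the most influential factor. Everything here is hypothetical: the factor names, the grid values, and especially `assay_response`, which stands in for a real automated readout from the lab.

```python
import itertools
import random

# Sketch of an "intelligent automation" loop for a hypothetical transfection
# protocol. assay_response is a made-up stand-in for an automated readout:
# we pretend editing efficiency peaks near voltage=1100 and dna_ng=500.
random.seed(1)

def assay_response(voltage, dna_ng, cells_per_ml):
    return (100
            - 0.00005 * (voltage - 1100) ** 2
            - 0.0001 * (dna_ng - 500) ** 2
            + random.gauss(0, 1))            # experimental noise

grid = {
    "voltage": [900, 1100, 1300],
    "dna_ng": [250, 500, 750],
    "cells_per_ml": [1e6, 2e6],
}

# Step 1: coarse screen over all combinations (a full factorial design).
results = []
for combo in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    results.append((assay_response(**params), params))

best_score, best_params = max(results, key=lambda r: r[0])
print("coarse optimum:", best_params)

# Step 2: refine around the best point for one sensitive factor (voltage,
# chosen here by hand; a real loop would rank factors by measured effect size).
for v in range(best_params["voltage"] - 100, best_params["voltage"] + 101, 50):
    score = assay_response(v, best_params["dna_ng"], best_params["cells_per_ml"])
    if score > best_score:
        best_score, best_params = score, {**best_params, "voltage": v}

print("refined optimum:", best_params)
```

A production system would replace the hand-picked refinement with a design-of-experiments or Bayesian optimization strategy, but the screen-rank-refine shape of the loop is the same.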

Such an optimization approach is relevant to many biological applications — viral vector design, enzyme engineering of designer nucleases, fermentation bioreactor processes, nanoparticle delivery formulations, and more. Thus, faster development cycle times attributable to intelligent automation will contribute to better biology tools, products, technologies, and therapies.

#4: “Full-stack” biotech platforms are modularizing data generation

The concept of “full-stack” comes from the software world, meaning from back-end (databases and architecture) to front-end (customer interface), connected by software between.

The concept is relatively novel in the life sciences though, and there are two key components to full-stack approaches in the life sciences —

  1. Vertical integration of experimental workflows and reagents. Piecemeal workflows lead to inconsistent outcomes, and even biological components such as enzymes can carry distortion. Full-stack biotech platforms are now realizing the value of integrated experimentation from design → performance → analysis. By modularizing each of these steps, full-stack biotechs are able to integrate particular modules to achieve reproducible experimental outcomes of interest, at scale.

  2. Feedback loops enabling troubleshooting, continuous improvement, and “data flywheels”. Full-stack hardware, and the integrated software “thread” that flows through it will enable the collection of data along the entire pathway of experimentation. Through such data collection, both troubleshooting and continuous improvement of quality, throughput, and signal occurs. Further, full-stack biology approaches enable “data flywheels”. In such a flywheel, each additional data point generated by the platform makes the subsequent data point easier to generate.

One field that has benefited significantly from full-stack approaches is that of synthetic biology. Here, biotech groups have integrated aspects of reagent engineering, experimental design and execution, and output application. Example synthetic biology companies leveraging full-stack platforms to perform experiments at scale include Synthego, Asimov, and Ginkgo.

[Figure: Jessop-Fabre et al.]

#5: Unbiased data and computational advances are converging to create predictive models of biology

The scientific method (hypothesis → test hypothesis → analyze) has yielded many discoveries over time. Key word, discoveries. As such we often uncover biological insights through a combination of ingenuity and good old luck. Now, a fundamental shift is occurring in the way biologists, engineers, and computer scientists are deriving insight from biology at scale.

In this new paradigm, the intended consumer of experimental data is not a scientist; it is an algorithm.

Computational techniques have sometimes been applied in haphazard ways to biology. Despite this, many clever applications have risen from the scramble. Machine learning, for example, has been effectively applied to diverse challenges such as classification of high content cellular images, predictive diagnosis from multi-omic datasets, and virtual compound screens for de novo designed drugs. Such computational approaches are well-suited for deriving meaning from complex, multi-dimensional datasets, and have advanced in tandem with progress in compute power and access.

[Figure: Goff]

In the new era of scale, more efficient and relevant experimentation is enabling us to generate datasets perfectly suited for ML-based interpretation.

Such data sets are tagged with rich descriptors, in multiple layers, and include both positive and negative experimental outcomes (unbiased). These data sets are accompanied by contextualizing metadata that provides valuable insight into the journey of the data itself (from creation to processing to curation). These data sets are of immense size, and are being produced at unprecedented cycle times, further strengthening the algorithm’s capability to make predictions.

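To make the idea of an algorithm-first record concrete, here is a sketch of what one richly tagged observation might look like. All field names, values, and version strings are invented for illustration; real platform schemas vary widely.

```python
import json
from datetime import datetime, timezone

# A single experimental observation packaged for an algorithm, not a human:
# the measurement itself, layered descriptors, an explicit outcome (negative
# results are kept, not discarded), and provenance metadata tracing the
# data's journey from creation to processing to curation.
record = {
    "measurement": {"readout": "cell_viability", "value": 0.31, "units": "fraction"},
    "descriptors": {
        "cell_line": "HEK293T",          # illustrative, not a recommendation
        "perturbation": "CRISPR_knockout",
        "target_gene": "EXAMPLE1",       # placeholder gene symbol
        "timepoint_h": 48,
    },
    "outcome": "negative",               # unbiased: failures are first-class data
    "provenance": {
        "instrument": "plate_reader_07",
        "protocol_version": "2.3.1",
        "operator": "automated",
        "processed_by": ["normalize_v2", "qc_filter_v1"],
        "recorded_at": datetime(2020, 6, 1, tzinfo=timezone.utc).isoformat(),
    },
}

# Serialization keeps the record portable between lab systems and ML pipelines.
print(json.dumps(record, indent=2))
```

The design choice worth noting is that context travels with the data point: a model trained on millions of such records can condition on instrument, protocol version, and processing history, rather than treating batch effects as invisible noise.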
We as humans are notoriously poor at understanding causation. With the right data sets enabled by advances in scale, and the right application of computation, we will make massive advances in realizing the relationships in complex datasets.

What will scale in biology actually achieve?

Fundamentally, relevant, intelligent, and validated scale will provide biology researchers and biotech companies with two tangible advantages —

  1. Generation of confidence-inspiring and novel data packages. Economic value in biotechnology lies squarely with the clinical assets that are generated, and scale will never be a substitute. Even in the early days of genomics, companies that pioneered scale (e.g. Celera) were outlasted by more asset-driven companies such as Plexxikon and Exelixis. Scale can however socialize concepts such as faster target validation, more descriptive data packages, and the use of data from genetics and other relatively novel spaces. Overall, scale will influence a biology researcher or drug developer’s conviction in a particular hypothesis.

  2. Optimization of experiments, technologies, and platforms. Scale will bring forth a new era in our ability to develop biotechnologies themselves. Intelligent iteration will be immensely enabling for companies looking to achieve outcomes from calibrating a functional readout, to designing a novel viral vector.

In a commercial sense, biotech groups effectively leveraging scale will have the opportunity to capture both upstream and downstream value in the biotechnology value chain (from early research → clinical assets). As data is accumulated from valid experimentation at scale a competitive moat may be formed from the intellectual property, allowing such a group to generate further economic value.

We are starting to see the first real impact of relevant, intelligent, and validated scale on the success rate of biological discovery. Despite this, the long-term impact is yet to be understood.

We must identify the tools, technology, and platforms that usher in a new era of scale. We must also identify the areas of biology and applications in biotechnology that are most amenable to disruption.

Biological discovery rarely happens fast, but as we build up these proofs we may see a true snowball-effect on the pace at which we are able to understand, derive insight from, and act on biology for the betterment of human health.

Source: https://medium.com/@skasbeks/biology-at-scale-5-reasons-why-it-finally-matters-89122bf0d126
