Learning Python

September 29, 2011

  Python is truly a fine language: its syntax is elegant, clear, and concise; it supports both procedural and object-oriented programming; and it interfaces easily with mainstream languages such as C/C++ and Java. Its collection of extension packages is remarkably rich (possibly the richest of any programming language on the planet). Lately I keep discovering how good many Python packages are, so I cannot resist singing the language's praises here. I also owe thanks to my graduate-school roommate using deng, from whom I first learned of this excellent language.
  Python covers program design, system administration, GUI development, web development, scientific computing, natural language processing, and more; it is powerful in all of these areas and easy to use. Most importantly, Python is open source and free, and, better still, it is remarkably easy to learn; almost no computer science background is required. I recommend learning this language: it is general purpose, runs on almost every platform (Windows, Linux, Mac, Android phones, and so on), and applies to almost any field. No wonder Google made it one of its three main languages (C++, Java, Python); it is also counted among the four languages every hacker should learn.
  Python learning material is abundant, and almost all of it can be downloaded online (though buying the paper editions is still recommended if you can). Some classic resources:
  [b]1. Fundamentals[/b]
  (1) Learning Python, 4th Edition (Chinese edition)
  (2) Beginning Python, 2nd Edition (Chinese edition)
  (3) Python Cookbook, 2nd Edition (Chinese edition)
  (4) Core Python Programming, 2nd Edition (Chinese edition)
  [b]2. GUI development[/b] (choose from PyQt, PyGTK, wxWidgets, etc.; PyQt is recommended and is quite powerful)
  (1) Rapid GUI Programming with Python and Qt
  [b]3. Network programming[/b] (network programming in Python is remarkably simple)
  (1) Foundations of Python Network Programming, 2nd Edition
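Network programming in Python really is that simple; the standard library's socket module handles the basics in a handful of lines. As a small stand-alone flavor (my own illustration, not an example from the book), here is a minimal TCP echo exchange over localhost:

```python
import socket
import threading

def serve_once(server_sock):
    """Accept a single connection and echo one message back."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)

# Listen on an OS-assigned free port on localhost (bind to port 0).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

# Client side: connect, send one message, read the reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

t.join()
server.close()
print(reply.decode())  # echo: hello
```

Binding to port 0 lets the OS pick a free port, so the sketch never collides with a service already running on the machine.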
  [b]4. System administration[/b] (mainly for Unix/Linux; the IPython shell is powerful)
  (1) Python for Unix and Linux System Administration
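As a taste of the kind of task those Unix/Linux chapters automate, here is a small standard-library sketch (my own illustration, not taken from the book) that totals disk usage under a directory tree:

```python
import os
import tempfile

def disk_usage(root):
    """Total size in bytes of all regular files under root."""
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total

# Demo on a throwaway directory containing two small files.
with tempfile.TemporaryDirectory() as root:
    with open(os.path.join(root, "a.log"), "wb") as f:
        f.write(b"x" * 100)
    os.mkdir(os.path.join(root, "sub"))
    with open(os.path.join(root, "sub", "b.log"), "wb") as f:
        f.write(b"y" * 50)
    total = disk_usage(root)
    print(total)  # 150
```

The same os.walk loop extends naturally to log rotation, stale-file cleanup, and similar everyday admin chores.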
  [b]5. Scientific computing[/b] (mainly the numpy, matplotlib, PIL, etc. packages; together they can stand in for the massive MATLAB)
  (1) Matplotlib for Python Developers
  (2) A Primer on Scientific Programming with Python, 2nd Edition
  (3) Python Scripting for Computational Science, 3rd Edition
  (4) Programming Collective Intelligence (Chinese edition)
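As a tiny illustration of the MATLAB-style work these packages enable, the sketch below (assuming numpy is installed) solves a small linear system, the numpy equivalent of MATLAB's x = A \ b:

```python
import numpy as np

# Solve A x = b for x; in MATLAB this would be x = A \ b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)
print(x)  # [2. 3.]
```

np.linalg.solve uses an LU factorization under the hood, which is both faster and numerically safer than forming the inverse explicitly.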
  [b]6. Natural language processing[/b] (search)
  (1) Python Text Processing with NLTK 2.0 Cookbook
  (2) Natural Language Processing with Python (Chinese edition)
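To give a flavor of the tokenization and frequency counting these NLTK books cover, here is a crude standard-library sketch; the regex tokenizer is a simplistic stand-in of my own, not NLTK's actual word_tokenize:

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokenizer: a crude stand-in for nltk.word_tokenize."""
    return re.findall(r"[a-z']+", text.lower())

text = "Python is simple. Simple is better than complex."
tokens = tokenize(text)
freq = Counter(tokens)
print(freq.most_common(2))  # [('is', 2), ('simple', 2)]
```

Real NLTK tokenizers handle punctuation, contractions, and sentence boundaries far more carefully, which is exactly why the books above are worth reading.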
  Whether you do software development or academic research, or are simply a computer enthusiast, this is a language you cannot afford to miss! Of course, Python reaches far beyond the fields listed here; my own view is limited, and much of its power remains to be explored!
  I have electronic copies of all the material above; contact me if you need any of it (email: shenfen.kuang@gmail.com)!
Preface

Natural Language Processing is used everywhere: in search engines, spell checkers, mobile phones, computer games, and even in your washing machine. Python's Natural Language Toolkit (NLTK) suite of libraries has rapidly emerged as one of the most efficient tools for Natural Language Processing. You want to employ nothing less than the best techniques in Natural Language Processing, and this book is your answer.

Python Text Processing with NLTK 2.0 Cookbook is your handy and illustrative guide, which will walk you through all the Natural Language Processing techniques in a step-by-step manner. It will demystify the advanced features of text analysis and text mining using the comprehensive NLTK suite.

This book cuts short the preamble and lets you dive right into the science of text processing with a practical hands-on approach. Get started with learning tokenization of text. Receive an overview of WordNet and how to use it. Learn the basics as well as advanced features of stemming and lemmatization. Discover various ways to replace words with simpler and more common (read: more searched) variants. Create your own corpora and learn to create custom corpus readers for data stored in MongoDB. Use and manipulate POS taggers. Transform and normalize parsed chunks to produce a canonical form without changing their meaning. Dig into feature extraction and text classification. Learn how to easily handle huge amounts of data without any loss in efficiency or speed.

This book will teach you all that and beyond, in a hands-on learn-by-doing manner. Make yourself an expert in using the NLTK for Natural Language Processing with this handy companion.

What this book covers

Chapter 1, Tokenizing Text and WordNet Basics, covers the basics of tokenizing text and using WordNet.
Chapter 2, Replacing and Correcting Words, discusses various word replacement and correction techniques. The recipes cover the gamut of linguistic compression, spelling correction, and text normalization.
Chapter 3, Creating Custom Corpora, covers how to use corpus readers and create custom corpora. At the same time, it explains how to use the existing corpus data that comes with NLTK.
Chapter 4, Part-of-Speech Tagging, explains the process of converting a sentence, in the form of a list of words, into a list of tuples. It also explains taggers, which are trainable.
Chapter 5, Extracting Chunks, explains the process of extracting short phrases from a part-of-speech tagged sentence. It uses the Penn Treebank corpus for basic training and testing chunk extraction, and the CoNLL 2000 corpus as it has a simpler and more flexible format that supports multiple chunk types.
Chapter 6, Transforming Chunks and Trees, shows you how to do various transforms on both chunks and trees. The functions detailed in these recipes modify data, as opposed to learning from it.
Chapter 7, Text Classification, describes a way to categorize documents or pieces of text: by examining the word usage in a piece of text, classifiers decide what class label should be assigned to it.
Chapter 8, Distributed Processing and Handling Large Datasets, discusses how to use execnet to do parallel and distributed processing with NLTK. It also explains how to use the Redis data structure server/database to store frequency distributions.
Chapter 9, Parsing Specific Data, covers parsing specific kinds of data, focusing primarily on dates, times, and HTML.
Appendix, Penn Treebank Part-of-Speech Tags, lists a table of all the part-of-speech tags that occur in the treebank corpus distributed with NLTK.
http://www.amazon.com/Python-Text-Processing-NLTK-Cookbook/dp/1782167854/
Paperback: 310 pages
Publisher: Packt Publishing - ebooks Account (August 26, 2014)
Language: English

Over 80 practical recipes on natural language processing techniques using Python's NLTK 3.0.

About This Book
(1) Break text down into its component parts for spelling correction, feature extraction, and phrase transformation
(2) Learn how to do custom sentiment analysis and named entity recognition
(3) Work through the natural language processing concepts with simple and easy-to-follow programming recipes

Who This Book Is For
This book is intended for Python programmers interested in learning how to do natural language processing. Maybe you've learned the limits of regular expressions the hard way, or you've realized that human language cannot be deterministically parsed like a computer language. Perhaps you have more text than you know what to do with, and need automated ways to analyze and structure that text. This Cookbook will show you how to train and use statistical language models to process text in ways that are practically impossible with standard programming tools. A basic knowledge of Python and basic text processing concepts is expected. Some experience with regular expressions will also be helpful.

In Detail
This book will show you the essential techniques of text and language processing. Starting with tokenization, stemming, and the WordNet dictionary, you'll progress to part-of-speech tagging, phrase chunking, and named entity recognition. You'll learn how various text corpora are organized, as well as how to create your own custom corpus. Then, you'll move on to text classification with a focus on sentiment analysis. And because NLP can be computationally expensive on large bodies of text, you'll try a few methods for distributed text processing. Finally, you'll be introduced to a number of other small but complementary Python libraries for text analysis, cleaning, and parsing.

This cookbook provides simple, straightforward examples so you can quickly learn text processing with Python and NLTK.