Compare commits


No commits in common. "master" and "v0.1.0" have entirely different histories.

105 changed files with 1655 additions and 7933 deletions


@@ -1,2 +1,6 @@
[run]
branch = True
source = camelot
include = */camelot/*
omit =
    */setup.py

.github/FUNDING.yml vendored

@@ -1 +0,0 @@
open_collective: camelot


@@ -1,57 +0,0 @@
---
name: Bug report
about: Please follow this template to submit bug reports.
title: ''
labels: bug
assignees: ''
---

<!-- Please read the filing issues section of the contributor's guide first: https://camelot-py.readthedocs.io/en/master/dev/contributing.html -->

**Describe the bug**
<!-- A clear and concise description of what the bug is. -->

**Steps to reproduce the bug**
<!-- Steps used to install `camelot`:
1. Add step here (you can add more steps too) -->
<!-- Steps to be used to reproduce behavior:
1. Add step here (you can add more steps too) -->

**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->

**Code**
<!-- Add the Camelot code snippet that you used. -->

```
import camelot
# add your code here
```

**PDF**
<!-- Add the PDF file that you want to extract tables from. -->

**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->

**Environment**
- OS: [e.g. macOS]
- Python version:
- Numpy version:
- OpenCV version:
- Ghostscript version:
- Camelot version:

**Additional context**
<!-- Add any other context about the problem here. -->


@@ -1,44 +0,0 @@
name: tests

on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.6, 3.7, 3.8]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install camelot with dependencies
        run: |
          make install
      - name: Test with pytest
        run: |
          make test

  test_latest:
    name: Test on ${{ matrix.os }} with Python 3.9
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        python-version: [3.9]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install camelot with dependencies
        run: |
          make install
      - name: Test with pytest
        run: |
          make test

.gitignore vendored

@@ -1,4 +1,3 @@
fontconfig/
__pycache__/
*.py[cod]
*.so
@@ -6,15 +5,7 @@ __pycache__/
build/
dist/
*.egg-info/
.eggs/
.coverage
coverage.xml
.pytest_cache/
_build/
.venv/
htmlcov/
# vscode
.vscode


@@ -1,27 +0,0 @@
# .readthedocs.yml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/conf.py

# Build documentation with MkDocs
#mkdocs:
#  configuration: mkdocs.yml

# Optionally build your docs in additional formats such as PDF
formats:
  - pdf

# Optionally set the version of Python and requirements required to build your docs
python:
  version: 3.8
  install:
    - method: pip
      path: .
      extra_requirements:
        - dev


@@ -2,7 +2,7 @@
If you're reading this, you're probably looking to contribute to Camelot. *Time is the only real currency*, and the fact that you're considering spending some here is *very* generous of you. Thank you very much!
This document will help you get started with contributing documentation, code, testing and filing issues. If you have any questions, feel free to reach out to [Vinayak Mehta](https://vinayak-mehta.github.io), the author and maintainer.
This document will help you get started with contributing documentation, code, testing and filing issues. If you have any questions, feel free to reach out to [Vinayak Mehta](http://vinayak-mehta.github.io), the author and maintainer.
## Code Of Conduct
@@ -14,29 +14,31 @@ Kenneth Reitz has also written an [essay](https://www.kennethreitz.org/essays/be
As the [Requests Code Of Conduct](http://docs.python-requests.org/en/master/dev/contributing/#be-cordial) states, **all contributions are welcome**, as long as everyone involved is treated with respect.
## Your first contribution
## Your First Contribution
A great way to start contributing to Camelot is to pick an issue tagged with the [help wanted](https://github.com/camelot-dev/camelot/labels/help%20wanted) tag or the [good first issue](https://github.com/camelot-dev/camelot/labels/good%20first%20issue) tag. If you're unable to find a good first issue, feel free to contact the maintainer.
A great way to start contributing to Camelot is to pick an issue tagged with the [Contributor Friendly](https://github.com/socialcopsdev/camelot/labels/Contributor%20Friendly) tag or the [Level: Easy](https://github.com/socialcopsdev/camelot/labels/Level%3A%20Easy) tag. If you're unable to find a good first issue, feel free to contact the maintainer.
## Setting up a development environment
To install the dependencies needed for development, you can use pip:
<pre>
$ pip install "camelot-py[dev]"
$ pip install camelot-py[dev]
</pre>
Alternatively, you can clone the project repository, and install using pip:
### Alternatively
You can clone the project repository, and install using pip:
<pre>
$ pip install ".[dev]"
$ pip install .[dev]
</pre>
## Pull Requests
### Submit a pull request
### Submit a Pull Request
The preferred workflow for contributing to Camelot is to fork the [project repository](https://github.com/camelot-dev/camelot) on GitHub, clone, develop on a branch and then finally submit a pull request. Here are the steps:
The preferred workflow for contributing to Camelot is to fork the [project repository](https://github.com/socialcopsdev/camelot) on GitHub, clone, develop on a branch and then finally submit a pull request. Steps:
1. Fork the project repository. Click on the Fork button near the top of the page. This creates a copy of the code under your account on GitHub.
@@ -71,7 +73,7 @@ $ git push -u origin my-feature
Now it's time to go to your fork of Camelot and create a pull request! You can [follow these instructions](https://help.github.com/articles/creating-a-pull-request-from-a-fork/) to do this.
### Work on your pull request
### Work on your Pull Request
We recommend that your pull request complies with the following rules:
@@ -79,7 +81,7 @@ We recommend that your pull request complies with the following rules:
- In case your pull request contains function docstrings, make sure you follow the [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) format. All function docstrings in Camelot follow this format. Moreover, following the format will make sure that the API documentation is generated flawlessly.
- Make sure your commit messages follow [the seven rules of a great git commit message](https://chris.beams.io/posts/git-commit/):
- Make sure your commit messages follow [the seven rules of a great git commit message](https://chris.beams.io/posts/git-commit/).
- Separate subject from body with a blank line
- Limit the subject line to 50 characters
- Capitalize the subject line
@@ -102,15 +104,15 @@ Writing documentation, function docstrings, examples and tutorials is a great wa
It is written in [reStructuredText](https://en.wikipedia.org/wiki/ReStructuredText), with [Sphinx](http://www.sphinx-doc.org/en/master/) used to generate these lovely HTML files that you're currently reading (unless you're reading this on GitHub). You can edit the documentation using any text editor and then generate the HTML output by running `make html` in the `docs/` directory.
The function docstrings are written using the [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) extension for Sphinx. Make sure you check out its format guidelines before you start writing one.
The function docstrings are written using the [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) extension for Sphinx. Make sure you check out its format guidelines, before you start writing one.
## Filing Issues
We use [GitHub issues](https://github.com/camelot-dev/camelot/issues) to keep track of all issues and pull requests. Before opening an issue (which asks a question or reports a bug), please use GitHub search to look for existing issues (both open and closed) that may be similar.
We use [GitHub issues](https://docs.pytest.org/en/latest/) to keep track of all issues and pull requests. Before opening an issue (which asks a question or reports a bug), it is advisable to use GitHub search to look for existing issues (both open and closed) that may be similar.
### Questions
Please don't use GitHub issues for support questions. A better place for them would be [Stack Overflow](http://stackoverflow.com). Make sure you tag them using the `python-camelot` tag.
Please don't use GitHub issues for support questions, a better place for them would be [Stack Overflow](http://stackoverflow.com). Make sure you tag them using the `python-camelot` tag.
### Bug Reports


@@ -1,281 +0,0 @@
Release History
===============
master
------
0.10.1 (2021-07-11)
-------------------
- Change extra requirements from `cv` to `base`. You can use `pip install "camelot-py[base]"` to install everything required to run camelot.
0.10.0 (2021-07-11)
-------------------
**Improvements**
- Add support for multiple image conversion backends. [#198](https://github.com/camelot-dev/camelot/pull/198) and [#253](https://github.com/camelot-dev/camelot/pull/253) by Vinayak Mehta.
- Add markdown export format. [#222](https://github.com/camelot-dev/camelot/pull/222/) by [Lucas Cimon](https://github.com/Lucas-C).
**Documentation**
- Add faq section. [#216](https://github.com/camelot-dev/camelot/pull/216) by [Stefano Fiorucci](https://github.com/anakin87).
0.9.0 (2021-06-15)
------------------
**Bugfixes**
- Fix use of resolution argument to generate image with ghostscript. [#231](https://github.com/camelot-dev/camelot/pull/231) by [Tiago Samaha Cordeiro](https://github.com/tiagosamaha).
- [#15](https://github.com/camelot-dev/camelot/issues/15) Fix duplicate strings being assigned to the same cell. [#206](https://github.com/camelot-dev/camelot/pull/206) by [Eduardo Gonzalez Lopez de Murillas](https://github.com/edugonza).
- Save plot when filename is specified. [#121](https://github.com/camelot-dev/camelot/pull/121) by [Jens Diemer](https://github.com/jedie).
- Close file streams explicitly. [#202](https://github.com/camelot-dev/camelot/pull/202) by [Martin Abente Lahaye](https://github.com/tchx84).
- Use correct re.sub signature. [#186](https://github.com/camelot-dev/camelot/pull/186) by [pevisscher](https://github.com/pevisscher).
- [#183](https://github.com/camelot-dev/camelot/issues/183) Fix UnicodeEncodeError when using Stream flavor by adding encoding kwarg to `to_html`. [#188](https://github.com/camelot-dev/camelot/pull/188) by [Stefano Fiorucci](https://github.com/anakin87).
- [#179](https://github.com/camelot-dev/camelot/issues/179) Fix `max() arg is an empty sequence` error on PDFs with blank pages. [#189](https://github.com/camelot-dev/camelot/pull/189) by Vinayak Mehta.
**Improvements**
- Add `line_overlap` and `boxes_flow` to `LAParams`. [#219](https://github.com/camelot-dev/camelot/pull/219) by [Arnie97](https://github.com/Arnie97).
- [Add bug report template.](https://github.com/camelot-dev/camelot/commit/0a3944e54d133b701edfe9c7546ff11289301ba8)
- Move from [Travis to GitHub Actions](https://github.com/camelot-dev/camelot/pull/241).
- Update `.readthedocs.yml` and [remove requirements.txt](https://github.com/camelot-dev/camelot/commit/7ab5db39d07baa4063f975e9e00f6073340e04c1#diff-cde814ef2f549dc093f5b8fc533b7e8f47e7b32a8081e0760e57d5c25a1139d9)
**Documentation**
- [#193](https://github.com/camelot-dev/camelot/issues/193) Add better checks to confirm proper installation of ghostscript. [#196](https://github.com/camelot-dev/camelot/pull/196) by [jimhall](https://github.com/jimhall).
- Update `advanced.rst` plotting examples. [#119](https://github.com/camelot-dev/camelot/pull/119) by [Jens Diemer](https://github.com/jedie).
0.8.2 (2020-07-27)
------------------
* Revert the changes in `0.8.1`.
0.8.1 (2020-07-21)
------------------
**Bugfixes**
* [#169](https://github.com/camelot-dev/camelot/issues/169) Fix import error caused by `pdfminer.six==20200720`. [#171](https://github.com/camelot-dev/camelot/pull/171) by Vinayak Mehta.
0.8.0 (2020-05-24)
------------------
**Improvements**
* Drop Python 2 support!
* Remove Python 2.7 and 3.5 support.
* Replace all instances of `.format` with f-strings.
* Remove all `__future__` imports.
* Fix HTTP 403 forbidden exception in read_pdf(url) and remove Python 2 urllib support.
* Fix test data.
**Bugfixes**
* Fix library discovery on Windows. [#32](https://github.com/camelot-dev/camelot/pull/32) by [KOLANICH](https://github.com/KOLANICH).
* Fix calling convention of callback functions. [#34](https://github.com/camelot-dev/camelot/pull/34) by [KOLANICH](https://github.com/KOLANICH).
0.7.3 (2019-07-07)
------------------
**Improvements**
* Camelot now follows the Black code style! [#1](https://github.com/camelot-dev/camelot/pull/1) and [#3](https://github.com/camelot-dev/camelot/pull/3).
**Bugfixes**
* Fix Click.HelpFormatter monkey-patch. [#5](https://github.com/camelot-dev/camelot/pull/5) by [Dimiter Naydenov](https://github.com/dimitern).
* Fix strip_text argument getting ignored. [#4](https://github.com/camelot-dev/camelot/pull/4) by [Dimiter Naydenov](https://github.com/dimitern).
* [#25](https://github.com/camelot-dev/camelot/issues/25) edge_tol skipped in read_pdf. [#26](https://github.com/camelot-dev/camelot/pull/26) by Vinayak Mehta.
* Fix pytest deprecation warning. [#2](https://github.com/camelot-dev/camelot/pull/2) by Vinayak Mehta.
* [#293](https://github.com/socialcopsdev/camelot/issues/293) Split text ignores all text to the right of last cut. [#294](https://github.com/socialcopsdev/camelot/pull/294) by Vinayak Mehta.
* [#277](https://github.com/socialcopsdev/camelot/issues/277) Sort TableList by order of tables in PDF. [#283](https://github.com/socialcopsdev/camelot/pull/283) by [Sym Roe](https://github.com/symroe).
* [#312](https://github.com/socialcopsdev/camelot/issues/312) `table_regions` throws `ValueError` when `flavor='stream'`. [#332](https://github.com/socialcopsdev/camelot/pull/332) by Vinayak Mehta.
0.7.2 (2019-01-10)
------------------
**Bugfixes**
* [#245](https://github.com/socialcopsdev/camelot/issues/245) Fix AttributeError for encrypted files. [#251](https://github.com/socialcopsdev/camelot/pull/251) by Yatin Taluja.
0.7.1 (2019-01-06)
------------------
**Bugfixes**
* Move ghostscript import to inside the function so Anaconda builds don't fail.
0.7.0 (2019-01-05)
------------------
**Improvements**
* [#209](https://github.com/socialcopsdev/camelot/issues/209) Add support to analyze only certain page regions to look for tables. [#243](https://github.com/socialcopsdev/camelot/pull/243) by Vinayak Mehta.
* You can use `table_regions` in `read_pdf()` to specify approximate page regions which may contain tables.
* Kwarg `line_size_scaling` is now called `line_scale`.
* [#212](https://github.com/socialcopsdev/camelot/issues/212) Add support to export as sqlite database. [#244](https://github.com/socialcopsdev/camelot/pull/244) by Vinayak Mehta.
* [#239](https://github.com/socialcopsdev/camelot/issues/239) Raise warning if PDF is image-based. [#240](https://github.com/socialcopsdev/camelot/pull/240) by Vinayak Mehta.
**Documentation**
* Remove mention of old mesh kwarg from docs. [#241](https://github.com/socialcopsdev/camelot/pull/241) by [fte10kso](https://github.com/fte10kso).
**Note**: The python wrapper to Ghostscript's C API is now vendorized under the `ext` module. This was done due to unavailability of the [ghostscript](https://pypi.org/project/ghostscript/) package on Anaconda. The code should be removed after we submit a recipe for it to conda-forge. With this release, the user doesn't need to ensure that the Ghostscript executable is available on the PATH variable.
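The `table_regions` option above narrows detection to approximate page areas. The core check — does a candidate table's bounding box fall inside a region — can be sketched standalone; the function name and coordinates are illustrative, not Camelot internals:

```python
def inside(bbox, region):
    """True if table bbox lies within region; both are (x1, y1, x2, y2) in PDF coordinates."""
    x1, y1, x2, y2 = bbox
    rx1, ry1, rx2, ry2 = region
    return rx1 <= x1 <= x2 <= rx2 and ry1 <= y1 <= y2 <= ry2

# a detected table fully contained in the requested region is kept
print(inside((100, 200, 300, 400), (50, 150, 350, 450)))  # → True
```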
0.6.0 (2018-12-24)
------------------
**Improvements**
* [#91](https://github.com/socialcopsdev/camelot/issues/91) Add support to read from url. [#236](https://github.com/socialcopsdev/camelot/pull/236) by Vinayak Mehta.
* [#229](https://github.com/socialcopsdev/camelot/issues/229), [#230](https://github.com/socialcopsdev/camelot/issues/230) and [#233](https://github.com/socialcopsdev/camelot/issues/233) New configuration parameters. [#234](https://github.com/socialcopsdev/camelot/pull/234) by Vinayak Mehta.
* `strip_text`: To define characters that should be stripped from each string.
* `edge_tol`: Tolerance parameter for extending textedges vertically.
* `resolution`: Resolution used for PDF to PNG conversion.
* Check out the [advanced docs](https://camelot-py.readthedocs.io/en/master/user/advanced.html#strip-characters-from-text) for usage details.
* [#170](https://github.com/socialcopsdev/camelot/issues/170) Add option to pass pdfminer layout kwargs. [#232](https://github.com/socialcopsdev/camelot/pull/232) by Vinayak Mehta.
* Keyword arguments for [pdfminer.layout.LAParams](https://github.com/euske/pdfminer/blob/master/pdfminer/layout.py#L33) can now be passed using `layout_kwargs` in `read_pdf()`.
* The `margins` keyword argument in `read_pdf()` is now deprecated.
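The `strip_text` parameter above can be pictured with a tiny standalone helper; this is an illustrative sketch of the idea (drop the given characters from each cell's text), not Camelot's actual implementation:

```python
def strip_text(cell: str, chars: str = "") -> str:
    """Remove every occurrence of each character in `chars` from `cell`."""
    for ch in chars:
        cell = cell.replace(ch, "")
    return cell

# e.g. drop newlines and thousands separators from extracted cell text
print(strip_text("1,234\n", ",\n"))  # → 1234
```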
0.5.0 (2018-12-13)
------------------
**Improvements**
* [#207](https://github.com/socialcopsdev/camelot/issues/207) Add a plot type for Stream text edges and detected table areas. [#224](https://github.com/socialcopsdev/camelot/pull/224) by Vinayak Mehta.
* [#204](https://github.com/socialcopsdev/camelot/issues/204) `suppress_warnings` is now called `suppress_stdout`. [#225](https://github.com/socialcopsdev/camelot/pull/225) by Vinayak Mehta.
**Bugfixes**
* [#217](https://github.com/socialcopsdev/camelot/issues/217) Fix IndexError when scale is large.
* [#105](https://github.com/socialcopsdev/camelot/issues/105), [#192](https://github.com/socialcopsdev/camelot/issues/192) and [#215](https://github.com/socialcopsdev/camelot/issues/215) in [#227](https://github.com/socialcopsdev/camelot/pull/227) by Vinayak Mehta.
**Documentation**
* Add pdfplumber comparison and update Tabula (stream) comparison. Check out the [wiki page](https://github.com/socialcopsdev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools).
0.4.1 (2018-12-05)
------------------
**Bugfixes**
* Add chardet to `install_requires` to fix [#210](https://github.com/socialcopsdev/camelot/issues/210). More details in [pdfminer.six#213](https://github.com/pdfminer/pdfminer.six/issues/213).
0.4.0 (2018-11-23)
------------------
**Improvements**
* [#102](https://github.com/socialcopsdev/camelot/issues/102) Detect tables automatically when Stream is used. [#206](https://github.com/socialcopsdev/camelot/pull/206) Add implementation of Anssi Nurminen's table detection algorithm by Vinayak Mehta.
0.3.2 (2018-11-04)
------------------
**Improvements**
* [#186](https://github.com/socialcopsdev/camelot/issues/186) Add `_bbox` attribute to table. [#193](https://github.com/socialcopsdev/camelot/pull/193) by Vinayak Mehta.
* You can use `table._bbox` to get coordinates of the detected table.
0.3.1 (2018-11-02)
------------------
**Improvements**
* Matplotlib is now an optional requirement. [#190](https://github.com/socialcopsdev/camelot/pull/190) by Vinayak Mehta.
* You can install it using `$ pip install camelot-py[plot]`.
* [#127](https://github.com/socialcopsdev/camelot/issues/127) Add tests for plotting. Coverage is now at 87%! [#179](https://github.com/socialcopsdev/camelot/pull/179) by [Suyash Behera](https://github.com/Suyash458).
0.3.0 (2018-10-28)
------------------
**Improvements**
* [#162](https://github.com/socialcopsdev/camelot/issues/162) Add password keyword argument. [#180](https://github.com/socialcopsdev/camelot/pull/180) by [rbares](https://github.com/rbares).
* An encrypted PDF can now be decrypted by passing `password='<PASSWORD>'` to `read_pdf` or `--password <PASSWORD>` to the command-line interface. (Limited encryption algorithm support from PyPDF2.)
* [#139](https://github.com/socialcopsdev/camelot/issues/139) Add suppress_warnings keyword argument. [#155](https://github.com/socialcopsdev/camelot/pull/155) by [Jonathan Lloyd](https://github.com/jonathanlloyd).
* Warnings raised by Camelot can now be suppressed by passing `suppress_warnings=True` to `read_pdf` or `--quiet` to the command-line interface.
* [#154](https://github.com/socialcopsdev/camelot/issues/154) The CLI can now be run using `python -m`. Try `python -m camelot --help`. [#159](https://github.com/socialcopsdev/camelot/pull/159) by [Parth P Panchal](https://github.com/pqrth).
* [#165](https://github.com/socialcopsdev/camelot/issues/165) Rename `table_area` to `table_areas`. [#171](https://github.com/socialcopsdev/camelot/pull/171) by [Parth P Panchal](https://github.com/pqrth).
**Bugfixes**
* Raise error if the ghostscript executable is not on the PATH variable. [#166](https://github.com/socialcopsdev/camelot/pull/166) by Vinayak Mehta.
* Convert filename to lowercase to check for PDF extension. [#169](https://github.com/socialcopsdev/camelot/pull/169) by [Vinicius Mesel](https://github.com/vmesel).
**Files**
* [#114](https://github.com/socialcopsdev/camelot/issues/114) Add Makefile and make codecov run only once. [#132](https://github.com/socialcopsdev/camelot/pull/132) by [Vaibhav Mule](https://github.com/vaibhavmule).
* Add .editorconfig. [#151](https://github.com/socialcopsdev/camelot/pull/151) by [KOLANICH](https://github.com/KOLANICH).
* Downgrade numpy version from 1.15.2 to 1.13.3.
* Add requirements.txt for readthedocs.
**Documentation**
* Add "Using conda" section to installation instructions.
* Add readthedocs badge.
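The `suppress_warnings=True` behaviour added above can be mimicked with the standard `warnings` module; this toy stand-in is only a sketch of the pattern, not Camelot's code:

```python
import warnings

def read_pdf_stub(suppress_warnings: bool = False):
    """Toy parser that can silence its own warnings, like the kwarg above."""
    with warnings.catch_warnings():
        if suppress_warnings:
            warnings.simplefilter("ignore")
        warnings.warn("PDF page seems image-based; no text found")
        return []

read_pdf_stub(suppress_warnings=True)  # runs without emitting the warning
```

`catch_warnings()` restores the caller's filter state on exit, so silencing is scoped to the call.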
0.2.3 (2018-10-08)
------------------
* Remove hard dependencies on requirements versions.
0.2.2 (2018-10-08)
------------------
**Bugfixes**
* Move opencv-python to extra\_requires. [#134](https://github.com/socialcopsdev/camelot/pull/134) by Vinayak Mehta.
0.2.1 (2018-10-05)
------------------
**Bugfixes**
* [#121](https://github.com/socialcopsdev/camelot/issues/121) Fix ghostscript subprocess call for Windows. [#124](https://github.com/socialcopsdev/camelot/pull/124) by Vinayak Mehta.
**Improvements**
* [#123](https://github.com/socialcopsdev/camelot/issues/123) Make PEP8 compatible. [#125](https://github.com/socialcopsdev/camelot/pull/125) by [Oshawk](https://github.com/Oshawk).
* [#110](https://github.com/socialcopsdev/camelot/issues/110) Add more tests. Coverage is now at 84%!
* Add tests for `__repr__`. [#128](https://github.com/socialcopsdev/camelot/pull/128) by [Vaibhav Mule](https://github.com/vaibhavmule).
* Add tests for CLI. [#122](https://github.com/socialcopsdev/camelot/pull/122) by [Vaibhav Mule](https://github.com/vaibhavmule) and [#117](https://github.com/socialcopsdev/camelot/pull/117) by Vinayak Mehta.
* Add tests for errors/warnings. [#113](https://github.com/socialcopsdev/camelot/pull/113) by Vinayak Mehta.
* Add tests for output formats and parser kwargs. [#126](https://github.com/socialcopsdev/camelot/pull/126) by Vinayak Mehta.
* Add Python 3.5 and 3.7 support. [#119](https://github.com/socialcopsdev/camelot/pull/119) by Vinayak Mehta.
* Add logging and warnings.
**Documentation**
* Copyedit all documentation. [#112](https://github.com/socialcopsdev/camelot/pull/112) by [Christine Garcia](https://github.com/christinegarcia).
* [#115](https://github.com/socialcopsdev/camelot/issues/115) Update issue labels in contributor's guide. [#116](https://github.com/socialcopsdev/camelot/pull/116) by [Johnny Metz](https://github.com/johnnymetz).
* Update installation instructions for Windows. [#124](https://github.com/socialcopsdev/camelot/pull/124) by Vinayak Mehta.
**Note**: This release also bumps the version for numpy from 1.13.3 to 1.15.2 and adds a MANIFEST.in. Also, openpyxl==2.5.8 is a new requirement and pytest-cov==2.6.0 is a new dev requirement.
0.2.0 (2018-09-28)
------------------
**Improvements**
* [#81](https://github.com/socialcopsdev/camelot/issues/81) Add Python 3.6 support. [#109](https://github.com/socialcopsdev/camelot/pull/109) by Vinayak Mehta.
0.1.2 (2018-09-25)
------------------
**Improvements**
* [#85](https://github.com/socialcopsdev/camelot/issues/85) Add Travis and Codecov.
0.1.1 (2018-09-24)
------------------
**Documentation**
* Add documentation fixes.
0.1.0 (2018-09-24)
------------------
* Rebirth!


@@ -1,7 +1,6 @@
MIT License
Copyright (c) 2019-2021 Camelot Developers
Copyright (c) 2018-2019 Peeply Private Ltd (Singapore)
Copyright (c) 2018 Peeply Private Ltd (Singapore)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -1 +0,0 @@
include MANIFEST.in README.md HISTORY.md LICENSE setup.py setup.cfg


@@ -1,28 +0,0 @@
.PHONY: docs

INSTALL :=
UNAME_S := $(shell uname -s)
ifeq ($(UNAME_S),Linux)
	INSTALL := @sudo apt install python-tk python3-tk ghostscript
else ifeq ($(UNAME_S),Darwin)
	INSTALL := @brew install tcl-tk ghostscript
else
	INSTALL := @echo "Please install tk and ghostscript"
endif

install:
	$(INSTALL)
	pip install --upgrade pip
	pip install ".[dev]"

test:
	pytest --verbose --cov-config .coveragerc --cov-report term --cov-report xml --cov=camelot --mpl

docs:
	cd docs && make html
	@echo "\033[95m\n\nBuild successful! View the docs homepage at docs/_build/html/index.html.\n\033[0m"

publish:
	pip install twine
	python setup.py sdist
	twine upload dist/*
	rm -fr build dist .egg camelot_py.egg-info
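The Makefile above picks an install command by branching on `uname -s`; the same OS detection can be mirrored in Python with the standard `platform` module. The helper below is hypothetical and only echoes the commands quoted in the Makefile:

```python
import platform

def system_install_hint() -> str:
    """Suggest the tk/ghostscript install command for the current OS."""
    system = platform.system()  # "Linux", "Darwin", "Windows", ...
    if system == "Linux":
        return "sudo apt install python-tk python3-tk ghostscript"
    if system == "Darwin":
        return "brew install tcl-tk ghostscript"
    return "Please install tk and ghostscript"

print(system_install_hint())
```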


@@ -1,28 +1,21 @@
<p align="center">
<img src="https://raw.githubusercontent.com/camelot-dev/camelot/master/docs/_static/camelot.png" width="200">
</p>
# Camelot: PDF Table Extraction for Humans
[![tests](https://github.com/camelot-dev/camelot/actions/workflows/tests.yml/badge.svg)](https://github.com/camelot-dev/camelot/actions/workflows/tests.yml) [![Documentation Status](https://readthedocs.org/projects/camelot-py/badge/?version=master)](https://camelot-py.readthedocs.io/en/master/)
[![codecov.io](https://codecov.io/github/camelot-dev/camelot/badge.svg?branch=master&service=github)](https://codecov.io/github/camelot-dev/camelot?branch=master)
[![image](https://img.shields.io/pypi/v/camelot-py.svg)](https://pypi.org/project/camelot-py/) [![image](https://img.shields.io/pypi/l/camelot-py.svg)](https://pypi.org/project/camelot-py/) [![image](https://img.shields.io/pypi/pyversions/camelot-py.svg)](https://pypi.org/project/camelot-py/) [![Gitter chat](https://badges.gitter.im/camelot-dev/Lobby.png)](https://gitter.im/camelot-dev/Lobby)
[![image](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black)
![license](https://img.shields.io/badge/license-MIT-lightgrey.svg) ![python-version](https://img.shields.io/badge/python-2.7-blue.svg)
**Camelot** is a Python library that can help you extract tables from PDFs!
**Camelot** is a Python library which makes it easy for *anyone* to extract tables from PDF files!
**Note:** You can also check out [Excalibur](https://github.com/camelot-dev/excalibur), the web interface to Camelot!
![camelot-logo](docs/_static/png/camelot-logo.png)
---
**Here's how you can extract tables from PDFs.** You can check out the PDF used in this example [here](https://github.com/camelot-dev/camelot/blob/master/docs/_static/pdf/foo.pdf).
**Here's how you can extract tables from PDF files.** Check out the PDF used in this example, [here](docs/_static/pdf/foo.pdf).
<pre>
>>> import camelot
>>> tables = camelot.read_pdf('foo.pdf')
>>> tables
&lt;TableList n=1&gt;
>>> tables.export('foo.csv', f='csv', compress=True) # json, excel, html, markdown, sqlite
&lt;TableList tables=1&gt;
>>> tables.export('foo.csv', f='csv', compress=True) # json, excel, html
>>> tables[0]
&lt;Table shape=(7, 7)&gt;
>>> tables[0].parsing_report
@@ -32,7 +25,7 @@
'order': 1,
'page': 1
}
>>> tables[0].to_csv('foo.csv') # to_json, to_excel, to_html, to_markdown, to_sqlite
>>> tables[0].to_csv('foo.csv') # to_json, to_excel, to_html
>>> tables[0].df # get a pandas DataFrame!
</pre>
@@ -45,73 +38,78 @@
| 2032_2 | 0.17 | 57.8 | 21.7% | 0.3% | 2.7% | 1.2% |
| 4171_1 | 0.07 | 173.9 | 58.1% | 1.6% | 2.1% | 0.5% |
Camelot also comes packaged with a [command-line interface](https://camelot-py.readthedocs.io/en/master/user/cli.html)!
**Note:** Camelot only works with text-based PDFs and not scanned documents. (As Tabula [explains](https://github.com/tabulapdf/tabula#why-tabula), "If you can click and drag to select text in your table in a PDF viewer, then your PDF is text-based".)
You can check out some frequently asked questions [here](https://camelot-py.readthedocs.io/en/master/user/faq.html).
There's a [command-line interface](http://camelot-py.readthedocs.io/en/master/user/cli.html) too!
## Why Camelot?
- **Configurability**: Camelot gives you control over the table extraction process with [tweakable settings](https://camelot-py.readthedocs.io/en/master/user/advanced.html).
- **Metrics**: You can discard bad tables based on metrics like accuracy and whitespace, without having to manually look at each table.
- **Output**: Each table is extracted into a **pandas DataFrame**, which seamlessly integrates into [ETL and data analysis workflows](https://gist.github.com/vinayak-mehta/e5949f7c2410a0e12f25d3682dc9e873). You can also export tables to multiple formats, which include CSV, JSON, Excel, HTML, Markdown, and Sqlite.
- **You are in control**: Unlike other libraries and tools which either give a nice output or fail miserably (with no in-between), Camelot gives you the power to tweak table extraction. (Since everything in the real world, including PDF table extraction, is fuzzy.)
- **Metrics**: *Bad* tables can be discarded based on metrics like accuracy and whitespace, without ever having to manually look at each table.
- Each table is a **pandas DataFrame**, which enables seamless integration into [ETL and data analysis workflows](https://gist.github.com/vinayak-mehta/e5949f7c2410a0e12f25d3682dc9e873).
- **Export** to multiple formats, including json, excel and html.
See [comparison with similar libraries and tools](https://github.com/camelot-dev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools).
## Support the development
If Camelot has helped you, please consider supporting its development with a one-time or monthly donation [on OpenCollective](https://opencollective.com/camelot).
See [comparison with other PDF table extraction libraries and tools](https://github.com/socialcopsdev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools).
## Installation
### Using conda
The easiest way to install Camelot is with [conda](https://conda.io/docs/), which is a package manager and environment management system for the [Anaconda](http://docs.continuum.io/anaconda/) distribution.
After [installing the dependencies](http://camelot-py.readthedocs.io/en/master/user/install.html), [tk](https://packages.ubuntu.com/trusty/python-tk) and [ghostscript](https://www.ghostscript.com/), you can simply use pip to install Camelot:
<pre>
$ conda install -c conda-forge camelot-py
$ pip install camelot-py
</pre>
### Using pip
### Alternatively
After [installing the dependencies](https://camelot-py.readthedocs.io/en/master/user/install-deps.html) ([tk](https://packages.ubuntu.com/bionic/python/python-tk) and [ghostscript](https://www.ghostscript.com/)), you can also just use pip to install Camelot:
After [installing the dependencies](http://camelot-py.readthedocs.io/en/master/user/install.html), clone the repo using:
<pre>
$ pip install "camelot-py[base]"
</pre>
### From the source code
After [installing the dependencies](https://camelot-py.readthedocs.io/en/master/user/install.html#using-pip), clone the repo using:
<pre>
$ git clone https://www.github.com/camelot-dev/camelot
</pre>
and install Camelot using pip:
<pre>
$ cd camelot
$ pip install ".[base]"
</pre>
Note: Use a [virtualenv](https://virtualenv.pypa.io/en/stable/) if you don't want to affect your global Python installation.
## Documentation
The documentation is available at [http://camelot-py.readthedocs.io/](http://camelot-py.readthedocs.io/).
## Wrappers
- [camelot-php](https://github.com/randomstate/camelot-php) provides a [PHP](https://www.php.net/) wrapper on Camelot.
## Contributing
The [Contributor's Guide](https://camelot-py.readthedocs.io/en/master/dev/contributing.html) has detailed information about contributing issues, documentation, code, and tests.
### Setting up a development environment
You can install the development dependencies easily, using pip:
<pre>
$ pip install "camelot-py[dev]"
</pre>
### Testing
After installation, you can run tests using:
<pre>
$ python setup.py test
</pre>
## Versioning
Camelot uses [Semantic Versioning](https://semver.org/). For the available versions, see the tags on this repository. For the changelog, you can check out [HISTORY.md](https://github.com/camelot-dev/camelot/blob/master/HISTORY.md).
## License
This project is licensed under the MIT License, see the [LICENSE](https://github.com/camelot-dev/camelot/blob/master/LICENSE) file for details.

# -*- coding: utf-8 -*-
import logging
from .__version__ import __version__
from .io import read_pdf
from .plotting import PlotMethods
# set up logging
logger = logging.getLogger("camelot")
format_string = "%(asctime)s - %(levelname)s - %(message)s"
formatter = logging.Formatter(format_string, datefmt="%Y-%m-%dT%H:%M:%S")
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
# instantiate plot method
plot = PlotMethods()

# -*- coding: utf-8 -*-
__all__ = ("main",)
def main():
from camelot.cli import cli
cli()
if __name__ == "__main__":
main()

# -*- coding: utf-8 -*-
VERSION = (0, 10, 1)
PRERELEASE = None # alpha, beta or rc
REVISION = None
def generate_version(version, prerelease=None, revision=None):
version_parts = [".".join(map(str, version))]
if prerelease is not None:
version_parts.append(f"-{prerelease}")
if revision is not None:
version_parts.append(f".{revision}")
return "".join(version_parts)
__title__ = "camelot-py"
__description__ = "PDF Table Extraction for Humans."
__url__ = "http://camelot-py.readthedocs.io/"
__version__ = generate_version(VERSION, prerelease=PRERELEASE, revision=REVISION)
__author__ = "Vinayak Mehta"
__author_email__ = "vmehta94@gmail.com"
__license__ = "MIT License"
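A quick check of how version strings are assembled. The function below is restated verbatim from the listing above so it runs standalone; the example inputs are illustrative:

```python
def generate_version(version, prerelease=None, revision=None):
    # Join the numeric parts with dots, then append the optional
    # "-<prerelease>" and ".<revision>" suffixes.
    version_parts = [".".join(map(str, version))]
    if prerelease is not None:
        version_parts.append(f"-{prerelease}")
    if revision is not None:
        version_parts.append(f".{revision}")
    return "".join(version_parts)

print(generate_version((0, 10, 1)))                               # 0.10.1
print(generate_version((0, 10, 1), prerelease="alpha"))           # 0.10.1-alpha
print(generate_version((0, 10, 1), prerelease="rc", revision=2))  # 0.10.1-rc.2
```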

# -*- coding: utf-8 -*-
from .image_conversion import ImageConversionBackend

# -*- coding: utf-8 -*-
import sys
import ctypes
from ctypes.util import find_library
def installed_posix():
library = find_library("gs")
return library is not None
def installed_windows():
library = find_library(
"".join(("gsdll", str(ctypes.sizeof(ctypes.c_voidp) * 8), ".dll"))
)
return library is not None
class GhostscriptBackend(object):
def installed(self):
if sys.platform in ["linux", "darwin"]:
return installed_posix()
elif sys.platform == "win32":
return installed_windows()
else:
return installed_posix()
def convert(self, pdf_path, png_path, resolution=300):
if not self.installed():
raise OSError(
"Ghostscript is not installed. You can install it using the instructions"
" here: https://camelot-py.readthedocs.io/en/master/user/install-deps.html"
)
import ghostscript
gs_command = [
"gs",
"-q",
"-sDEVICE=png16m",
"-o",
png_path,
f"-r{resolution}",
pdf_path,
]
ghostscript.Ghostscript(*gs_command)
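On Windows, `installed_windows` derives the DLL name from the interpreter's pointer width (`gsdll32.dll`/`gsdll64.dll` is Ghostscript's own naming convention). A standalone sketch of that computation, with the helper name made up for illustration:

```python
import ctypes

def expected_gs_dll_name(pointer_size=None):
    # 4-byte pointers -> 32-bit build -> gsdll32.dll;
    # 8-byte pointers -> 64-bit build -> gsdll64.dll.
    if pointer_size is None:
        pointer_size = ctypes.sizeof(ctypes.c_voidp)
    return "".join(("gsdll", str(pointer_size * 8), ".dll"))

print(expected_gs_dll_name(4))  # gsdll32.dll
print(expected_gs_dll_name(8))  # gsdll64.dll
```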

# -*- coding: utf-8 -*-
from .poppler_backend import PopplerBackend
from .ghostscript_backend import GhostscriptBackend
BACKENDS = {"poppler": PopplerBackend, "ghostscript": GhostscriptBackend}
class ImageConversionBackend(object):
def __init__(self, backend="poppler", use_fallback=True):
if backend not in BACKENDS.keys():
raise ValueError(f"Image conversion backend '{backend}' not supported")
self.backend = backend
self.use_fallback = use_fallback
self.fallbacks = list(filter(lambda x: x != backend, BACKENDS.keys()))
def convert(self, pdf_path, png_path):
try:
converter = BACKENDS[self.backend]()
converter.convert(pdf_path, png_path)
except Exception as e:
import sys
if self.use_fallback:
for fallback in self.fallbacks:
try:
converter = BACKENDS[fallback]()
converter.convert(pdf_path, png_path)
except Exception as e:
raise type(e)(
str(e) + f" with image conversion backend '{fallback}'"
).with_traceback(sys.exc_info()[2])
continue
else:
break
else:
raise type(e)(
str(e) + f" with image conversion backend '{self.backend}'"
).with_traceback(sys.exc_info()[2])
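The fallback logic above can be exercised without any PDF tooling. A toy sketch (all class and function names here are hypothetical stand-ins) showing that when the requested backend fails, the remaining backends are tried in order:

```python
class FailingBackend:
    def convert(self, pdf_path, png_path):
        raise OSError("primary backend unavailable")

class RecordingBackend:
    calls = []
    def convert(self, pdf_path, png_path):
        RecordingBackend.calls.append((pdf_path, png_path))

BACKENDS = {"poppler": FailingBackend, "ghostscript": RecordingBackend}

def convert_with_fallback(backend, pdf_path, png_path):
    # Try the requested backend first; on failure, walk the others in order.
    fallbacks = [name for name in BACKENDS if name != backend]
    try:
        BACKENDS[backend]().convert(pdf_path, png_path)
    except Exception:
        for name in fallbacks:
            try:
                BACKENDS[name]().convert(pdf_path, png_path)
            except Exception:
                continue
            break

convert_with_fallback("poppler", "in.pdf", "out.png")
print(RecordingBackend.calls)  # [('in.pdf', 'out.png')]
```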

# -*- coding: utf-8 -*-
import shutil
import subprocess
class PopplerBackend(object):
def convert(self, pdf_path, png_path):
pdftopng_executable = shutil.which("pdftopng")
if pdftopng_executable is None:
raise OSError(
"pdftopng is not installed. You can install it using the 'pip install pdftopng' command."
)
pdftopng_command = [pdftopng_executable, pdf_path, png_path]
try:
subprocess.check_output(
" ".join(pdftopng_command), stderr=subprocess.STDOUT, shell=True
)
except subprocess.CalledProcessError as e:
raise ValueError(e.output)

# -*- coding: utf-8 -*-
import logging
from pprint import pprint
import click
try:
import matplotlib.pyplot as plt
except ImportError:
_HAS_MPL = False
else:
_HAS_MPL = True
from . import __version__, read_pdf, plot
logger = logging.getLogger("camelot")
logger.setLevel(logging.INFO)
class Config(object):
def __init__(self):
self.config = {}
def set_config(self, key, value):
@ -29,276 +19,134 @@ class Config(object):
pass_config = click.make_pass_decorator(Config)
@click.group(name="camelot")
@click.version_option(version=__version__)
@click.option("-q", "--quiet", is_flag=False, help="Suppress logs and warnings.")
@click.option(
"-p",
"--pages",
default="1",
help="Comma-separated page numbers." " Example: 1,3,4 or 1,4-end or all.",
)
@click.option("-pw", "--password", help="Password for decryption.")
@click.option("-o", "--output", help="Output file path.")
@click.option(
"-f",
"--format",
type=click.Choice(["csv", "excel", "html", "json", "markdown", "sqlite"]),
help="Output file format.",
)
@click.option("-z", "--zip", is_flag=True, help="Create ZIP archive.")
@click.option(
"-split",
"--split_text",
is_flag=True,
help="Split text that spans across multiple cells.",
)
@click.option(
"-flag",
"--flag_size",
is_flag=True,
help="Flag text based on" " font size. Useful to detect super/subscripts.",
)
@click.option(
"-strip",
"--strip_text",
help="Characters that should be stripped from a string before"
" assigning it to a cell.",
)
@click.option(
"-M",
"--margins",
nargs=3,
default=(1.0, 0.5, 0.1),
help="PDFMiner char_margin, line_margin and word_margin.",
)
@click.pass_context
def cli(ctx, *args, **kwargs):
"""Camelot: PDF Table Extraction for Humans"""
ctx.obj = Config()
for key, value in kwargs.items():
ctx.obj.set_config(key, value)
@cli.command("lattice")
@click.option(
"-R",
"--table_regions",
default=[],
multiple=True,
help="Page regions to analyze. Example: x1,y1,x2,y2"
" where x1, y1 -> left-top and x2, y2 -> right-bottom.",
)
@click.option(
"-T",
"--table_areas",
default=[],
multiple=True,
help="Table areas to process. Example: x1,y1,x2,y2"
" where x1, y1 -> left-top and x2, y2 -> right-bottom.",
)
@click.option(
"-back", "--process_background", is_flag=True, help="Process background lines."
)
@click.option(
"-scale",
"--line_scale",
default=15,
help="Line size scaling factor. The larger the value,"
" the smaller the detected lines.",
)
@click.option(
"-copy",
"--copy_text",
default=[],
type=click.Choice(["h", "v"]),
multiple=True,
help="Direction in which text in a spanning cell" " will be copied over.",
)
@click.option(
"-shift",
"--shift_text",
default=["l", "t"],
type=click.Choice(["", "l", "r", "t", "b"]),
multiple=True,
help="Direction in which text in a spanning cell will flow.",
)
@click.option(
"-l",
"--line_tol",
default=2,
help="Tolerance parameter used to merge close vertical" " and horizontal lines.",
)
@click.option(
"-j",
"--joint_tol",
default=2,
help="Tolerance parameter used to decide whether"
" the detected lines and points lie close to each other.",
)
@click.option(
"-block",
"--threshold_blocksize",
default=15,
help="For adaptive thresholding, size of a pixel"
" neighborhood that is used to calculate a threshold value for"
" the pixel. Example: 3, 5, 7, and so on.",
)
@click.option(
"-const",
"--threshold_constant",
default=-2,
help="For adaptive thresholding, constant subtracted"
" from the mean or weighted mean. Normally, it is positive but"
" may be zero or negative as well.",
)
@click.option(
"-I",
"--iterations",
default=0,
help="Number of times for erosion/dilation will be applied.",
)
@click.option(
"-res",
"--resolution",
default=300,
help="Resolution used for PDF to PNG conversion.",
)
@click.option(
"-plot",
"--plot_type",
type=click.Choice(["text", "grid", "contour", "joint", "line"]),
help="Plot elements found on PDF page for visual debugging.",
)
@click.argument("filepath", type=click.Path(exists=True))
@pass_config
def lattice(c, *args, **kwargs):
"""Use lines between text to parse the table."""
conf = c.config
pages = conf.pop("pages")
output = conf.pop("output")
f = conf.pop("format")
compress = conf.pop("zip")
quiet = conf.pop("quiet")
plot_type = kwargs.pop("plot_type")
filepath = kwargs.pop("filepath")
kwargs.update(conf)
table_regions = list(kwargs["table_regions"])
kwargs["table_regions"] = None if not table_regions else table_regions
table_areas = list(kwargs["table_areas"])
kwargs["table_areas"] = None if not table_areas else table_areas
copy_text = list(kwargs["copy_text"])
kwargs["copy_text"] = None if not copy_text else copy_text
kwargs["shift_text"] = list(kwargs["shift_text"])
if plot_type is not None:
if not _HAS_MPL:
raise ImportError("matplotlib is required for plotting.")
else:
if output is None:
raise click.UsageError("Please specify output file path using --output")
if f is None:
raise click.UsageError("Please specify output file format using --format")
tables = read_pdf(
filepath, pages=pages, flavor="lattice", suppress_stdout=quiet, **kwargs
)
click.echo(f"Found {tables.n} tables")
if plot_type is not None:
for table in tables:
plot(table, kind=plot_type)
plt.show()
else:
tables.export(output, f=f, compress=compress)
@cli.command("stream")
@click.option(
"-R",
"--table_regions",
default=[],
multiple=True,
help="Page regions to analyze. Example: x1,y1,x2,y2"
" where x1, y1 -> left-top and x2, y2 -> right-bottom.",
)
@click.option(
"-T",
"--table_areas",
default=[],
multiple=True,
help="Table areas to process. Example: x1,y1,x2,y2"
" where x1, y1 -> left-top and x2, y2 -> right-bottom.",
)
@click.option(
"-C",
"--columns",
default=[],
multiple=True,
help="X coordinates of column separators.",
)
@click.option(
"-e",
"--edge_tol",
default=50,
help="Tolerance parameter" " for extending textedges vertically.",
)
@click.option(
"-r",
"--row_tol",
default=2,
help="Tolerance parameter" " used to combine text vertically, to generate rows.",
)
@click.option(
"-c",
"--column_tol",
default=0,
help="Tolerance parameter"
" used to combine text horizontally, to generate columns.",
)
@click.option(
"-plot",
"--plot_type",
type=click.Choice(["text", "grid", "contour", "textedge"]),
help="Plot elements found on PDF page for visual debugging.",
)
@click.argument("filepath", type=click.Path(exists=True))
@pass_config
def stream(c, *args, **kwargs):
"""Use spaces between text to parse the table."""
conf = c.config
pages = conf.pop("pages")
output = conf.pop("output")
f = conf.pop("format")
compress = conf.pop("zip")
quiet = conf.pop("quiet")
plot_type = kwargs.pop("plot_type")
filepath = kwargs.pop("filepath")
kwargs.update(conf)
table_regions = list(kwargs["table_regions"])
kwargs["table_regions"] = None if not table_regions else table_regions
table_areas = list(kwargs["table_areas"])
kwargs["table_areas"] = None if not table_areas else table_areas
columns = list(kwargs["columns"])
kwargs["columns"] = None if not columns else columns
if plot_type is not None:
if not _HAS_MPL:
raise ImportError("matplotlib is required for plotting.")
else:
if output is None:
raise click.UsageError("Please specify output file path using --output")
if f is None:
raise click.UsageError("Please specify output file format using --format")
tables = read_pdf(
filepath, pages=pages, flavor="stream", suppress_stdout=quiet, **kwargs
)
click.echo(f"Found {tables.n} tables")
if plot_type is not None:
for table in tables:
plot(table, kind=plot_type)
plt.show()
else:
tables.export(output, f=f, compress=compress)

# -*- coding: utf-8 -*-
import os
import sqlite3
import json
import zipfile
import tempfile
from itertools import chain
from operator import itemgetter
import numpy as np
import pandas as pd
# minimum number of vertical textline intersections for a textedge
# to be considered valid
TEXTEDGE_REQUIRED_ELEMENTS = 4
# padding added to table area on the left, right and bottom
TABLE_AREA_PADDING = 10
class TextEdge(object):
"""Defines a text edge coordinates relative to a left-bottom
origin. (PDF coordinate space)
Parameters
----------
x : float
x-coordinate of the text edge.
y0 : float
y-coordinate of bottommost point.
y1 : float
y-coordinate of topmost point.
align : string, optional (default: 'left')
{'left', 'right', 'middle'}
Attributes
----------
intersections: int
Number of intersections with horizontal text rows.
is_valid: bool
A text edge is valid if it intersects with at least
TEXTEDGE_REQUIRED_ELEMENTS horizontal text rows.
"""
def __init__(self, x, y0, y1, align="left"):
self.x = x
self.y0 = y0
self.y1 = y1
self.align = align
self.intersections = 0
self.is_valid = False
def __repr__(self):
x = round(self.x, 2)
y0 = round(self.y0, 2)
y1 = round(self.y1, 2)
return (
f"<TextEdge x={x} y0={y0} y1={y1} align={self.align} valid={self.is_valid}>"
)
def update_coords(self, x, y0, edge_tol=50):
"""Updates the text edge's x and bottom y coordinates and sets
the is_valid attribute.
"""
if np.isclose(self.y0, y0, atol=edge_tol):
self.x = (self.intersections * self.x + x) / float(self.intersections + 1)
self.y0 = y0
self.intersections += 1
# a textedge is valid only if it extends uninterrupted
# over a required number of textlines
if self.intersections > TEXTEDGE_REQUIRED_ELEMENTS:
self.is_valid = True
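The x coordinate maintained by `update_coords` is a running average over all textlines folded into the edge so far. A standalone sketch of that update rule (the function name and sample coordinates are illustrative):

```python
def update_running_x(x_avg, n, x_new):
    # n is the number of textlines already folded into x_avg; weighting
    # the current average by n and renormalizing yields the new mean.
    return (n * x_avg + x_new) / float(n + 1)

x, n = 0.0, 0
for x_new in [100.0, 102.0, 98.0, 104.0]:
    x = update_running_x(x, n, x_new)
    n += 1
print(x)  # 101.0 -- the mean of the four x coordinates
```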
class TextEdges(object):
"""Defines a dict of left, right and middle text edges found on
the PDF page. The dict has three keys based on the alignments,
and each key's value is a list of camelot.core.TextEdge objects.
"""
def __init__(self, edge_tol=50):
self.edge_tol = edge_tol
self._textedges = {"left": [], "right": [], "middle": []}
@staticmethod
def get_x_coord(textline, align):
"""Returns the x coordinate of a text row based on the
specified alignment.
"""
x_left = textline.x0
x_right = textline.x1
x_middle = x_left + (x_right - x_left) / 2.0
x_coord = {"left": x_left, "middle": x_middle, "right": x_right}
return x_coord[align]
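For a concrete bounding box, the three alignment coordinates look like this. This is a standalone restatement of `get_x_coord` that takes a bare bbox tuple instead of a PDFMiner textline; the sample numbers are illustrative:

```python
def x_coord(bbox, align):
    # bbox = (x0, y0, x1, y1) in PDF space; the y values are unused here.
    x_left, _, x_right, _ = bbox
    x_middle = x_left + (x_right - x_left) / 2.0
    return {"left": x_left, "middle": x_middle, "right": x_right}[align]

bbox = (10.0, 700.0, 30.0, 712.0)
print(x_coord(bbox, "left"), x_coord(bbox, "middle"), x_coord(bbox, "right"))
# 10.0 20.0 30.0
```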
def find(self, x_coord, align):
"""Returns the index of an existing text edge using
the specified x coordinate and alignment.
"""
for i, te in enumerate(self._textedges[align]):
if np.isclose(te.x, x_coord, atol=0.5):
return i
return None
def add(self, textline, align):
"""Adds a new text edge to the current dict."""
x = self.get_x_coord(textline, align)
y0 = textline.y0
y1 = textline.y1
te = TextEdge(x, y0, y1, align=align)
self._textedges[align].append(te)
def update(self, textline):
"""Updates an existing text edge in the current dict."""
for align in ["left", "right", "middle"]:
x_coord = self.get_x_coord(textline, align)
idx = self.find(x_coord, align)
if idx is None:
self.add(textline, align)
else:
self._textedges[align][idx].update_coords(
x_coord, textline.y0, edge_tol=self.edge_tol
)
def generate(self, textlines):
"""Generates the text edges dict based on horizontal text
rows.
"""
for tl in textlines:
if len(tl.get_text().strip()) > 1: # TODO: hacky
self.update(tl)
def get_relevant(self):
"""Returns the list of relevant text edges (all share the same
alignment) based on which list intersects horizontal text rows
the most.
"""
intersections_sum = {
"left": sum(
te.intersections for te in self._textedges["left"] if te.is_valid
),
"right": sum(
te.intersections for te in self._textedges["right"] if te.is_valid
),
"middle": sum(
te.intersections for te in self._textedges["middle"] if te.is_valid
),
}
# TODO: naive
# get vertical textedges that intersect maximum number of
# times with horizontal textlines
relevant_align = max(intersections_sum.items(), key=itemgetter(1))[0]
return self._textedges[relevant_align]
def get_table_areas(self, textlines, relevant_textedges):
"""Returns a dict of interesting table areas on the PDF page
calculated using relevant text edges.
"""
def pad(area, average_row_height):
x0 = area[0] - TABLE_AREA_PADDING
y0 = area[1] - TABLE_AREA_PADDING
x1 = area[2] + TABLE_AREA_PADDING
# add extra headroom since table headers can sit well above the rows
y1 = area[3] + average_row_height * 5
return (x0, y0, x1, y1)
# sort relevant textedges in reading order
relevant_textedges.sort(key=lambda te: (-te.y0, te.x))
table_areas = {}
for te in relevant_textedges:
if te.is_valid:
if not table_areas:
table_areas[(te.x, te.y0, te.x, te.y1)] = None
else:
found = None
for area in table_areas:
# check for overlap
if te.y1 >= area[1] and te.y0 <= area[3]:
found = area
break
if found is None:
table_areas[(te.x, te.y0, te.x, te.y1)] = None
else:
table_areas.pop(found)
updated_area = (
found[0],
min(te.y0, found[1]),
max(found[2], te.x),
max(found[3], te.y1),
)
table_areas[updated_area] = None
# extend table areas based on textlines that overlap
# vertically. it's possible that these textlines were
# eliminated during textedges generation since numbers and
# chars/words/sentences are often aligned differently.
# drawback: table areas that have paragraphs on their sides
# will include the paragraphs too.
sum_textline_height = 0
for tl in textlines:
sum_textline_height += tl.y1 - tl.y0
found = None
for area in table_areas:
# check for overlap
if tl.y0 >= area[1] and tl.y1 <= area[3]:
found = area
break
if found is not None:
table_areas.pop(found)
updated_area = (
min(tl.x0, found[0]),
min(tl.y0, found[1]),
max(found[2], tl.x1),
max(found[3], tl.y1),
)
table_areas[updated_area] = None
average_textline_height = sum_textline_height / float(len(textlines))
# add some padding to table areas
table_areas_padded = {}
for area in table_areas:
table_areas_padded[pad(area, average_textline_height)] = None
return table_areas_padded
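The padding step can be tried in isolation. Note the asymmetry: left, right and bottom grow by the fixed `TABLE_AREA_PADDING`, while the top grows by five average row heights to catch headers (the function below mirrors the nested `pad` helper above; the sample area is illustrative):

```python
TABLE_AREA_PADDING = 10

def pad(area, average_row_height):
    # area = (x0, y0, x1, y1) with a left-bottom origin.
    x0 = area[0] - TABLE_AREA_PADDING
    y0 = area[1] - TABLE_AREA_PADDING
    x1 = area[2] + TABLE_AREA_PADDING
    # headers often sit well above the detected rows
    y1 = area[3] + average_row_height * 5
    return (x0, y0, x1, y1)

print(pad((100, 200, 300, 400), 12))  # (90, 190, 310, 460)
```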
class Cell(object):
@ -285,14 +68,11 @@ class Cell(object):
self.bottom = False
self.hspan = False
self.vspan = False
self._text = ""
def __repr__(self):
x1 = round(self.x1)
y1 = round(self.y1)
x2 = round(self.x2)
y2 = round(self.y2)
return f"<Cell x1={x1} y1={y1} x2={x2} y2={y2}>"
@property
def text(self):
@ -300,11 +80,12 @@ class Cell(object):
@text.setter
def text(self, t):
self._text = "".join([self._text, t])
@property
def bound(self):
"""The number of sides on which the cell is bounded."""
"""The number of sides on which the cell is bounded.
"""
return self.top + self.bottom + self.left + self.right
@ -336,11 +117,11 @@ class Table(object):
PDF page number.
"""
def __init__(self, cols, rows):
self.cols = cols
self.rows = rows
self.cells = [[Cell(c[0], r[1], c[1], r[0]) for c in cols] for r in rows]
self.df = None
self.shape = (0, 0)
self.accuracy = 0
@ -349,18 +130,12 @@ class Table(object):
self.page = None
def __repr__(self):
return f"<{self.__class__.__name__} shape={self.shape}>"
def __lt__(self, other):
if self.page == other.page:
if self.order < other.order:
return True
if self.page < other.page:
return True
@property
def data(self):
"""Returns two-dimensional list of strings in table."""
"""Returns two-dimensional list of strings in table.
"""
d = []
for row in self.cells:
d.append([cell.text.strip() for cell in row])
@ -373,21 +148,22 @@ class Table(object):
"""
# pretty?
report = {
"accuracy": round(self.accuracy, 2),
"whitespace": round(self.whitespace, 2),
"order": self.order,
"page": self.page,
return report
def set_all_edges(self):
"""Sets all table edges to True."""
"""Sets all table edges to True.
"""
for row in self.cells:
for cell in row:
cell.left = cell.right = cell.top = cell.bottom = True
return self
def set_edges(self, vertical, horizontal, joint_tol=2):
"""Sets a cell's edges to True depending on whether the cell's
coordinates overlap with the line's coordinates within a
tolerance.
@ -403,21 +179,12 @@ class Table(object):
for v in vertical:
# find closest x coord
# iterate over y coords and find closest start and end points
i = [
i
for i, t in enumerate(self.cols)
if np.isclose(v[0], t[0], atol=joint_tol)
]
j = [
j
for j, t in enumerate(self.rows)
if np.isclose(v[3], t[0], atol=joint_tol)
]
k = [
k
for k, t in enumerate(self.rows)
if np.isclose(v[1], t[0], atol=joint_tol)
]
if not j:
continue
J = j[0]
@ -463,21 +230,12 @@ class Table(object):
for h in horizontal:
# find closest y coord
# iterate over x coords and find closest start and end points
i = [
i
for i, t in enumerate(self.rows)
if np.isclose(h[1], t[0], atol=joint_tol)
]
j = [
j
for j, t in enumerate(self.cols)
if np.isclose(h[0], t[0], atol=joint_tol)
]
k = [
k
for k, t in enumerate(self.cols)
if np.isclose(h[2], t[0], atol=joint_tol)
]
if not j:
continue
J = j[0]
@ -494,7 +252,7 @@ class Table(object):
self.cells[L][J].top = True
J += 1
elif i == []: # only bottom edge
L = len(self.rows) - 1
if k:
K = k[0]
while J < K:
@ -523,7 +281,8 @@ class Table(object):
return self
def set_border(self):
"""Sets table border edges to True."""
"""Sets table border edges to True.
"""
for r in range(len(self.rows)):
self.cells[r][0].left = True
self.cells[r][len(self.cols) - 1].right = True
@ -563,6 +322,33 @@ class Table(object):
cell.hspan = True
return self
def to_csv(self, path, **kwargs):
"""Writes Table to a comma-separated values (csv) file.
@ -574,7 +360,12 @@ class Table(object):
Output filepath.
"""
kw = {"encoding": "utf-8", "index": False, "header": False, "quoting": 1}
kw.update(kwargs)
self.df.to_csv(path, **kw)
@ -589,10 +380,12 @@ class Table(object):
Output filepath.
"""
kw = {"orient": "records"}
kw.update(kwargs)
json_string = self.df.to_json(**kw)
with open(path, "w") as f:
f.write(json_string)
def to_excel(self, path, **kwargs):
@ -607,8 +400,8 @@ class Table(object):
"""
kw = {
"sheet_name": f"page-{self.page}-table-{self.order}",
"encoding": "utf-8",
}
kw.update(kwargs)
writer = pd.ExcelWriter(path)
@ -627,43 +420,9 @@ class Table(object):
"""
html_string = self.df.to_html(**kwargs)
with open(path, "w", encoding="utf-8") as f:
f.write(html_string)
def to_markdown(self, path, **kwargs):
"""Writes Table to a Markdown file.
For kwargs, check :meth:`pandas.DataFrame.to_markdown`.
Parameters
----------
path : str
Output filepath.
"""
md_string = self.df.to_markdown(**kwargs)
with open(path, "w", encoding="utf-8") as f:
f.write(md_string)
def to_sqlite(self, path, **kwargs):
"""Writes Table to sqlite database.
For kwargs, check :meth:`pandas.DataFrame.to_sql`.
Parameters
----------
path : str
Output filepath.
"""
kw = {"if_exists": "replace", "index": False}
kw.update(kwargs)
conn = sqlite3.connect(path)
table_name = f"page-{self.page}-table-{self.order}"
self.df.to_sql(table_name, conn, **kw)
conn.commit()
conn.close()
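The sqlite path can be sketched with the standard library alone, using plain rows in place of a DataFrame. The helper name is made up for illustration; the table name mirrors the `page-{page}-table-{order}` scheme above, which is why the identifiers must be quoted:

```python
import sqlite3

def rows_to_sqlite(conn, table_name, header, rows):
    # Quote identifiers since the naming scheme contains '-' characters.
    cols = ", ".join(f'"{h}"' for h in header)
    conn.execute(f'CREATE TABLE "{table_name}" ({cols})')
    marks = ", ".join("?" for _ in header)
    conn.executemany(f'INSERT INTO "{table_name}" VALUES ({marks})', rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
rows_to_sqlite(conn, "page-1-table-1", ["col1", "col2"], [("a", "b"), ("c", "d")])
count = conn.execute('SELECT COUNT(*) FROM "page-1-table-1"').fetchone()[0]
print(count)  # 2
```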
class TableList(object):
"""Defines a list of camelot.core.Table objects. Each table can
@ -675,12 +434,12 @@ class TableList(object):
Number of tables in the list.
"""
def __init__(self, tables):
self._tables = tables
def __repr__(self):
return f"<{self.__class__.__name__} n={self.n}>"
def __len__(self):
return len(self._tables)
@@ -688,37 +447,51 @@ class TableList(object):
def __getitem__(self, idx):
return self._tables[idx]
def __iter__(self):
self._n = 0
return self
def next(self):
if self._n < len(self):
r = self._tables[self._n]
self._n += 1
return r
else:
raise StopIteration
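The `next` method above is the Python 2 iterator hook; under Python 3 the same protocol is spelled `__next__`. A minimal stand-in class (names hypothetical) showing the same index-based iteration:

```python
# Python 3 version of the index-based iterator protocol sketched above.
class SimpleTableList:
    def __init__(self, tables):
        self._tables = tables

    def __iter__(self):
        self._n = 0  # reset the cursor each time iteration starts
        return self

    def __next__(self):
        if self._n < len(self._tables):
            r = self._tables[self._n]
            self._n += 1
            return r
        raise StopIteration

print(list(SimpleTableList(["t1", "t2", "t3"])))  # ['t1', 't2', 't3']
```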
@staticmethod
def _format_func(table, f):
return getattr(table, f"to_{f}")
return getattr(table, 'to_{}'.format(f))
@property
def n(self):
return len(self)
def _write_file(self, f=None, **kwargs):
dirname = kwargs.get("dirname")
root = kwargs.get("root")
ext = kwargs.get("ext")
dirname = kwargs.get('dirname')
root = kwargs.get('root')
ext = kwargs.get('ext')
for table in self._tables:
filename = f"{root}-page-{table.page}-table-{table.order}{ext}"
filename = os.path.join('{}-page-{}-table-{}{}'.format(
root, table.page, table.order, ext))
filepath = os.path.join(dirname, filename)
to_format = self._format_func(table, f)
to_format(filepath)
def _compress_dir(self, **kwargs):
path = kwargs.get("path")
dirname = kwargs.get("dirname")
root = kwargs.get("root")
ext = kwargs.get("ext")
zipname = os.path.join(os.path.dirname(path), root) + ".zip"
with zipfile.ZipFile(zipname, "w", allowZip64=True) as z:
path = kwargs.get('path')
dirname = kwargs.get('dirname')
root = kwargs.get('root')
ext = kwargs.get('ext')
zipname = os.path.join(os.path.dirname(path), root) + '.zip'
with zipfile.ZipFile(zipname, 'w', allowZip64=True) as z:
for table in self._tables:
filename = f"{root}-page-{table.page}-table-{table.order}{ext}"
filename = os.path.join('{}-page-{}-table-{}{}'.format(
root, table.page, table.order, ext))
filepath = os.path.join(dirname, filename)
z.write(filepath, os.path.basename(filepath))
def export(self, path, f="csv", compress=False):
def export(self, path, f='csv', compress=False):
"""Exports the list of tables to specified file format.
Parameters
@@ -726,7 +499,7 @@ class TableList(object):
path : str
Output filepath.
f : str
File format. Can be csv, excel, html, json, markdown or sqlite.
File format. Can be csv, json, excel and html.
compress : bool
Whether or not to add files to a ZIP archive.
@@ -737,28 +510,25 @@ class TableList(object):
if compress:
dirname = tempfile.mkdtemp()
kwargs = {"path": path, "dirname": dirname, "root": root, "ext": ext}
kwargs = {
'path': path,
'dirname': dirname,
'root': root,
'ext': ext
}
if f in ["csv", "html", "json", "markdown"]:
if f in ['csv', 'json', 'html']:
self._write_file(f=f, **kwargs)
if compress:
self._compress_dir(**kwargs)
elif f == "excel":
elif f == 'excel':
filepath = os.path.join(dirname, basename)
writer = pd.ExcelWriter(filepath)
for table in self._tables:
sheet_name = f"page-{table.page}-table-{table.order}"
table.df.to_excel(writer, sheet_name=sheet_name, encoding="utf-8")
sheet_name = 'page-{}-table-{}'.format(table.page, table.order)
table.df.to_excel(writer, sheet_name=sheet_name, encoding='utf-8')
writer.save()
if compress:
zipname = os.path.join(os.path.dirname(path), root) + ".zip"
with zipfile.ZipFile(zipname, "w", allowZip64=True) as z:
z.write(filepath, os.path.basename(filepath))
elif f == "sqlite":
filepath = os.path.join(dirname, basename)
for table in self._tables:
table.to_sqlite(filepath)
if compress:
zipname = os.path.join(os.path.dirname(path), root) + ".zip"
with zipfile.ZipFile(zipname, "w", allowZip64=True) as z:
z.write(filepath, os.path.basename(filepath))
zipname = os.path.join(os.path.dirname(path), root) + '.zip'
with zipfile.ZipFile(zipname, 'w', allowZip64=True) as z:
z.write(filepath, os.path.basename(filepath))


@@ -1,20 +1,13 @@
# -*- coding: utf-8 -*-
import os
import sys
from PyPDF2 import PdfFileReader, PdfFileWriter
from .core import TableList
from .parsers import Stream, Lattice
from .utils import (
TemporaryDirectory,
get_page_layout,
get_text_objects,
get_rotation,
is_url,
download_url,
)
from .utils import (TemporaryDirectory, get_page_layout, get_text_objects,
get_rotation)
class PDFHandler(object):
@@ -24,41 +17,29 @@ class PDFHandler(object):
Parameters
----------
filepath : str
Filepath or URL of the PDF file.
filename : str
Path to PDF file.
pages : str, optional (default: '1')
Comma-separated page numbers.
Example: '1,3,4' or '1,4-end' or 'all'.
password : str, optional (default: None)
Password for decryption.
Example: 1,3,4 or 1,4-end.
"""
def __init__(self, filename, pages='1'):
self.filename = filename
if not self.filename.endswith('.pdf'):
raise TypeError("File format not supported.")
self.pages = self._get_pages(self.filename, pages)
def __init__(self, filepath, pages="1", password=None):
if is_url(filepath):
filepath = download_url(filepath)
self.filepath = filepath
# if not filepath.lower().endswith(".pdf"):
# raise NotImplementedError("File format not supported")
if password is None:
self.password = ""
else:
self.password = password
if sys.version_info[0] < 3:
self.password = self.password.encode("ascii")
self.pages = self._get_pages(pages)
def _get_pages(self, pages):
def _get_pages(self, filename, pages):
"""Converts pages string to list of ints.
Parameters
----------
filepath : str
Filepath or URL of the PDF file.
filename : str
Path to PDF file.
pages : str, optional (default: '1')
Comma-separated page numbers.
Example: '1,3,4' or '1,4-end' or 'all'.
Example: 1,3,4 or 1,4-end.
Returns
-------
@@ -67,84 +48,73 @@ class PDFHandler(object):
"""
page_numbers = []
if pages == "1":
page_numbers.append({"start": 1, "end": 1})
if pages == '1':
page_numbers.append({'start': 1, 'end': 1})
else:
with open(self.filepath, "rb") as f:
infile = PdfFileReader(f, strict=False)
if infile.isEncrypted:
infile.decrypt(self.password)
if pages == "all":
page_numbers.append({"start": 1, "end": infile.getNumPages()})
else:
for r in pages.split(","):
if "-" in r:
a, b = r.split("-")
if b == "end":
b = infile.getNumPages()
page_numbers.append({"start": int(a), "end": int(b)})
else:
page_numbers.append({"start": int(r), "end": int(r)})
infile = PdfFileReader(open(filename, 'rb'), strict=False)
if pages == 'all':
page_numbers.append({'start': 1, 'end': infile.getNumPages()})
else:
for r in pages.split(','):
if '-' in r:
a, b = r.split('-')
if b == 'end':
b = infile.getNumPages()
page_numbers.append({'start': int(a), 'end': int(b)})
else:
page_numbers.append({'start': int(r), 'end': int(r)})
P = []
for p in page_numbers:
P.extend(range(p["start"], p["end"] + 1))
P.extend(range(p['start'], p['end'] + 1))
return sorted(set(P))
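The page-string expansion above can be exercised without a PDF. `expand_pages` is a hypothetical stand-in where a known total page count replaces the `getNumPages()` lookup:

```python
# Stand-in for PDFHandler._get_pages: expand a comma-separated page string
# like '1,4-end' into sorted unique page numbers, given the page count.
def expand_pages(pages, num_pages):
    ranges = []
    if pages == "all":
        ranges.append((1, num_pages))
    else:
        for r in pages.split(","):
            if "-" in r:
                a, b = r.split("-")
                b = num_pages if b == "end" else int(b)
                ranges.append((int(a), int(b)))
            else:
                ranges.append((int(r), int(r)))
    out = []
    for a, b in ranges:
        out.extend(range(a, b + 1))  # ranges are inclusive on both ends
    return sorted(set(out))

print(expand_pages("1,4-end", 6))  # [1, 4, 5, 6]
```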
def _save_page(self, filepath, page, temp):
def _save_page(self, filename, page, temp):
"""Saves specified page from PDF into a temporary directory.
Parameters
----------
filepath : str
Filepath or URL of the PDF file.
filename : str
Path to PDF file.
page : int
Page number.
temp : str
Tmp directory.
"""
with open(filepath, "rb") as fileobj:
with open(filename, 'rb') as fileobj:
infile = PdfFileReader(fileobj, strict=False)
if infile.isEncrypted:
infile.decrypt(self.password)
fpath = os.path.join(temp, f"page-{page}.pdf")
infile.decrypt('')
fpath = os.path.join(temp, 'page-{0}.pdf'.format(page))
froot, fext = os.path.splitext(fpath)
p = infile.getPage(page - 1)
outfile = PdfFileWriter()
outfile.addPage(p)
with open(fpath, "wb") as f:
with open(fpath, 'wb') as f:
outfile.write(f)
layout, dim = get_page_layout(fpath)
# fix rotated PDF
chars = get_text_objects(layout, ltype="char")
horizontal_text = get_text_objects(layout, ltype="horizontal_text")
vertical_text = get_text_objects(layout, ltype="vertical_text")
rotation = get_rotation(chars, horizontal_text, vertical_text)
if rotation != "":
fpath_new = "".join([froot.replace("page", "p"), "_rotated", fext])
lttextlh = get_text_objects(layout, ltype="lh")
lttextlv = get_text_objects(layout, ltype="lv")
ltchar = get_text_objects(layout, ltype="char")
rotation = get_rotation(lttextlh, lttextlv, ltchar)
if rotation != '':
fpath_new = ''.join([froot.replace('page', 'p'), '_rotated', fext])
os.rename(fpath, fpath_new)
instream = open(fpath_new, "rb")
infile = PdfFileReader(instream, strict=False)
infile = PdfFileReader(open(fpath_new, 'rb'), strict=False)
if infile.isEncrypted:
infile.decrypt(self.password)
infile.decrypt('')
outfile = PdfFileWriter()
p = infile.getPage(0)
if rotation == "anticlockwise":
if rotation == 'anticlockwise':
p.rotateClockwise(90)
elif rotation == "clockwise":
elif rotation == 'clockwise':
p.rotateCounterClockwise(90)
outfile.addPage(p)
with open(fpath, "wb") as f:
with open(fpath, 'wb') as f:
outfile.write(f)
instream.close()
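The branch above turns a rotated page back upright: anticlockwise-rotated content is corrected by rotating clockwise, and vice versa. A tiny sketch of that mapping (helper name hypothetical), with degrees standing in for the PyPDF2 rotate calls:

```python
# Sketch of the rotation fix above: map the detected rotation string to the
# correcting turn, in degrees (positive = clockwise, as in rotateClockwise).
def correction_degrees(rotation):
    return {"anticlockwise": 90, "clockwise": -90}.get(rotation, 0)

print(correction_degrees("anticlockwise"))  # 90
print(correction_degrees("clockwise"))      # -90
print(correction_degrees(""))               # 0 (no rotation detected)
```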
def parse(
self, flavor="lattice", suppress_stdout=False, layout_kwargs={}, **kwargs
):
def parse(self, flavor='lattice', **kwargs):
"""Extracts tables by calling parser.get_tables on all single
page PDFs.
@@ -153,10 +123,6 @@ class PDFHandler(object):
flavor : str (default: 'lattice')
The parsing method to use ('lattice' or 'stream').
Lattice is used by default.
suppress_stdout : str (default: False)
Suppress logs and warnings.
layout_kwargs : dict, optional (default: {})
A dict of `pdfminer.layout.LAParams <https://github.com/euske/pdfminer/blob/master/pdfminer/layout.py#L33>`_ kwargs.
kwargs : dict
See camelot.read_pdf kwargs.
@@ -164,17 +130,19 @@ class PDFHandler(object):
-------
tables : camelot.core.TableList
List of tables found in PDF.
geometry : camelot.core.GeometryList
List of geometry objects (contours, lines, joints) found
in PDF.
"""
tables = []
with TemporaryDirectory() as tempdir:
for p in self.pages:
self._save_page(self.filepath, p, tempdir)
pages = [os.path.join(tempdir, f"page-{p}.pdf") for p in self.pages]
parser = Lattice(**kwargs) if flavor == "lattice" else Stream(**kwargs)
self._save_page(self.filename, p, tempdir)
pages = [os.path.join(tempdir, 'page-{0}.pdf'.format(p))
for p in self.pages]
parser = Lattice(**kwargs) if flavor == 'lattice' else Stream(**kwargs)
for p in pages:
t = parser.extract_tables(
p, suppress_stdout=suppress_stdout, layout_kwargs=layout_kwargs
)
t = parser.extract_tables(p)
tables.extend(t)
return TableList(sorted(tables))
return TableList(tables)


@@ -1,8 +1,14 @@
# -*- coding: utf-8 -*-
from __future__ import division
from itertools import groupby
from operator import itemgetter
import cv2
import numpy as np
from .utils import merge_tuples
def adaptive_threshold(imagename, process_background=False, blocksize=15, c=-2):
"""Thresholds an image using OpenCV's adaptiveThreshold.
@@ -36,24 +42,15 @@ def adaptive_threshold(imagename, process_background=False, blocksize=15, c=-2):
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
if process_background:
threshold = cv2.adaptiveThreshold(
gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, blocksize, c
)
threshold = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
cv2.THRESH_BINARY, blocksize, c)
else:
threshold = cv2.adaptiveThreshold(
np.invert(gray),
255,
cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
cv2.THRESH_BINARY,
blocksize,
c,
)
threshold = cv2.adaptiveThreshold(np.invert(gray), 255,
cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, blocksize, c)
return img, threshold
def find_lines(
threshold, regions=None, direction="horizontal", line_scale=15, iterations=0
):
def find_lines(threshold, direction='horizontal', line_size_scaling=15, iterations=0):
"""Finds horizontal and vertical lines by applying morphological
transformations on an image.
@@ -61,13 +58,9 @@ def find_lines(
----------
threshold : object
numpy.ndarray representing the thresholded image.
regions : list, optional (default: None)
List of page regions that may contain tables of the form x1,y1,x2,y2
where (x1, y1) -> left-top and (x2, y2) -> right-bottom
in image coordinate space.
direction : string, optional (default: 'horizontal')
Specifies whether to find vertical or horizontal lines.
line_scale : int, optional (default: 15)
line_size_scaling : int, optional (default: 15)
Factor by which the page dimensions will be divided to get
smallest length of lines that should be detected.
@@ -91,21 +84,15 @@ def find_lines(
"""
lines = []
if direction == "vertical":
size = threshold.shape[0] // line_scale
if direction == 'vertical':
size = threshold.shape[0] // line_size_scaling
el = cv2.getStructuringElement(cv2.MORPH_RECT, (1, size))
elif direction == "horizontal":
size = threshold.shape[1] // line_scale
elif direction == 'horizontal':
size = threshold.shape[1] // line_size_scaling
el = cv2.getStructuringElement(cv2.MORPH_RECT, (size, 1))
elif direction is None:
raise ValueError("Specify direction as either 'vertical' or 'horizontal'")
if regions is not None:
region_mask = np.zeros(threshold.shape)
for region in regions:
x, y, w, h = region
region_mask[y : y + h, x : x + w] = 1
threshold = np.multiply(threshold, region_mask)
raise ValueError("Specify direction as either 'vertical' or"
" 'horizontal'")
threshold = cv2.erode(threshold, el)
threshold = cv2.dilate(threshold, el)
@@ -113,27 +100,24 @@ def find_lines(
try:
_, contours, _ = cv2.findContours(
threshold.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
)
threshold, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
except ValueError:
# for opencv backward compatibility
contours, _ = cv2.findContours(
threshold.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
)
threshold, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
x, y, w, h = cv2.boundingRect(c)
x1, x2 = x, x + w
y1, y2 = y, y + h
if direction == "vertical":
if direction == 'vertical':
lines.append(((x1 + x2) // 2, y2, (x1 + x2) // 2, y1))
elif direction == "horizontal":
elif direction == 'horizontal':
lines.append((x1, (y1 + y2) // 2, x2, (y1 + y2) // 2))
return dmask, lines
def find_contours(vertical, horizontal):
def find_table_contours(vertical, horizontal):
"""Finds table boundaries using OpenCV's findContours.
Parameters
@@ -155,14 +139,10 @@ def find_contours(vertical, horizontal):
try:
__, contours, __ = cv2.findContours(
mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
)
mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
except ValueError:
# for opencv backward compatibility
contours, __ = cv2.findContours(
mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
)
# sort in reverse based on contour area and use first 10 contours
mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:10]
cont = []
@@ -173,7 +153,7 @@ def find_contours(vertical, horizontal):
return cont
def find_joints(contours, vertical, horizontal):
def find_table_joints(contours, vertical, horizontal):
"""Finds joints/intersections present inside each table boundary.
Parameters
@@ -196,20 +176,17 @@ def find_joints(contours, vertical, horizontal):
and (x2, y2) -> rt in image coordinate space.
"""
joints = np.multiply(vertical, horizontal)
joints = np.bitwise_and(vertical, horizontal)
tables = {}
for c in contours:
x, y, w, h = c
roi = joints[y : y + h, x : x + w]
try:
__, jc, __ = cv2.findContours(
roi.astype(np.uint8), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE
)
roi, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
except ValueError:
# for opencv backward compatibility
jc, __ = cv2.findContours(
roi.astype(np.uint8), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE
)
roi, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
if len(jc) <= 4: # remove contours with less than 4 joints
continue
joint_coords = []
@@ -220,3 +197,79 @@ def find_joints(contours, vertical, horizontal):
tables[(x, y + h, x + w, y)] = joint_coords
return tables
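The joints mask above is the elementwise product (equivalently AND, for binary masks) of the vertical and horizontal line masks: a joint exists only where a vertical and a horizontal line cross. A pure-Python sketch on tiny 0/1 grids (`mask_and` is an illustrative helper, not camelot code):

```python
# Elementwise AND of two binary masks, as find_joints does with
# np.multiply / np.bitwise_and on the line masks above.
def mask_and(a, b):
    return [[x & y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

vertical   = [[0, 1, 0],
              [0, 1, 0],
              [0, 1, 0]]
horizontal = [[0, 0, 0],
              [1, 1, 1],
              [0, 0, 0]]
print(mask_and(vertical, horizontal))  # joint only where the lines cross
```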
def remove_lines(threshold, line_size_scaling=15):
"""Removes lines from a thresholded image.
Parameters
----------
threshold : object
numpy.ndarray representing the thresholded image.
line_size_scaling : int, optional (default: 15)
Factor by which the page dimensions will be divided to get
smallest length of lines that should be detected.
The larger this value, the smaller the detected lines. Making it
too large will lead to text being detected as lines.
Returns
-------
threshold : object
numpy.ndarray representing the thresholded image
with horizontal and vertical lines removed.
"""
size = threshold.shape[0] // line_size_scaling
vertical_erode_el = cv2.getStructuringElement(cv2.MORPH_RECT, (1, size))
horizontal_erode_el = cv2.getStructuringElement(cv2.MORPH_RECT, (size, 1))
dilate_el = cv2.getStructuringElement(cv2.MORPH_RECT, (10, 10))
vertical = cv2.erode(threshold, vertical_erode_el)
vertical = cv2.dilate(vertical, dilate_el)
horizontal = cv2.erode(threshold, horizontal_erode_el)
horizontal = cv2.dilate(horizontal, dilate_el)
threshold = np.bitwise_and(threshold, np.invert(vertical))
threshold = np.bitwise_and(threshold, np.invert(horizontal))
return threshold
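The last two statements of `remove_lines` keep only pixels that are NOT part of either line mask: `threshold AND NOT(vertical) AND NOT(horizontal)`. A pure-Python sketch of that masking step on 0/255 grids (`remove_mask` is illustrative, not camelot code):

```python
# Clear every pixel of `img` that is set in `mask`, as the
# np.bitwise_and(threshold, np.invert(mask)) calls above do.
def remove_mask(img, mask):
    return [[p & (255 ^ m) for p, m in zip(ri, rm)] for ri, rm in zip(img, mask)]

img  = [[255, 255],
        [255,   0]]
line = [[255,   0],
        [255,   0]]
print(remove_mask(img, line))  # [[0, 255], [0, 0]]
```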
def find_cuts(threshold, char_size_scaling=200):
"""Finds cuts made by text projections on y-axis.
Parameters
----------
threshold : object
numpy.ndarray representing the thresholded image.
char_size_scaling : int, optional (default: 200)
Factor by which the page dimensions will be divided to get
smallest length of lines that should be detected.
The larger this value, the smaller the detected lines. Making it
too large will lead to text being detected as lines.
Returns
-------
y_cuts : list
List of cuts on y-axis.
"""
size = threshold.shape[0] // char_size_scaling
char_el = cv2.getStructuringElement(cv2.MORPH_RECT, (1, size))
threshold = cv2.erode(threshold, char_el)
threshold = cv2.dilate(threshold, char_el)
try:
__, contours, __ = cv2.findContours(threshold, cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
except ValueError:
contours, __ = cv2.findContours(threshold, cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
contours = [cv2.boundingRect(c) for c in contours]
y_cuts = [(c[1], c[1] + c[3]) for c in contours]
y_cuts = list(merge_tuples(sorted(y_cuts)))
y_cuts = [(y_cuts[i][0] + y_cuts[i - 1][1]) // 2 for i in range(1, len(y_cuts))]
return sorted(y_cuts, reverse=True)
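`merge_tuples` comes from camelot's utils; assuming it merges overlapping intervals as sketched below, the cut computation in `find_cuts` reduces to merging the text contours' y-extents and taking the midpoint between consecutive merged bands:

```python
# Assumed behavior of merge_tuples: coalesce sorted, overlapping intervals.
def merge_tuples(intervals):
    merged = [list(intervals[0])]
    for lo, hi in intervals[1:]:
        if lo <= merged[-1][1]:                      # overlaps previous band
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    return [tuple(m) for m in merged]

bands = merge_tuples(sorted([(0, 10), (8, 15), (30, 40)]))
# midpoint between the end of one band and the start of the next
cuts = [(bands[i][0] + bands[i - 1][1]) // 2 for i in range(1, len(bands))]
print(bands, cuts)  # [(0, 15), (30, 40)] [22]
```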


@@ -1,20 +1,10 @@
# -*- coding: utf-8 -*-
import warnings
from .handlers import PDFHandler
from .utils import validate_input, remove_extra
def read_pdf(
filepath,
pages="1",
password=None,
flavor="lattice",
suppress_stdout=False,
layout_kwargs={},
**kwargs
):
def read_pdf(filepath, pages='1', flavor='lattice', **kwargs):
"""Read PDF and return extracted tables.
Note: kwargs annotated with ^ can only be used with flavor='stream'
@@ -23,20 +13,14 @@ def read_pdf(
Parameters
----------
filepath : str
Filepath or URL of the PDF file.
Path to PDF file.
pages : str, optional (default: '1')
Comma-separated page numbers.
Example: '1,3,4' or '1,4-end' or 'all'.
password : str, optional (default: None)
Password for decryption.
Example: 1,3,4 or 1,4-end.
flavor : str (default: 'lattice')
The parsing method to use ('lattice' or 'stream').
Lattice is used by default.
suppress_stdout : bool, optional (default: True)
Print all logs and warnings.
layout_kwargs : dict, optional (default: {})
A dict of `pdfminer.layout.LAParams <https://github.com/euske/pdfminer/blob/master/pdfminer/layout.py#L33>`_ kwargs.
table_areas : list, optional (default: None)
table_area : list, optional (default: None)
List of table area strings of the form x1,y1,x2,y2
where (x1, y1) -> left-top and (x2, y2) -> right-bottom
in PDF coordinate space.
@@ -48,18 +32,15 @@ def read_pdf(
flag_size : bool, optional (default: False)
Flag text based on font size. Useful to detect
super/subscripts. Adds <s></s> around flagged text.
strip_text : str, optional (default: '')
Characters that should be stripped from a string before
assigning it to a cell.
row_tol^ : int, optional (default: 2)
row_close_tol^ : int, optional (default: 2)
Tolerance parameter used to combine text vertically,
to generate rows.
column_tol^ : int, optional (default: 0)
col_close_tol^ : int, optional (default: 0)
Tolerance parameter used to combine text horizontally,
to generate columns.
process_background* : bool, optional (default: False)
Process background lines.
line_scale* : int, optional (default: 15)
line_size_scaling* : int, optional (default: 15)
Line size scaling factor. The larger the value the smaller
the detected lines. Making it very large will lead to text
being detected as lines.
@@ -70,10 +51,10 @@ def read_pdf(
shift_text* : list, optional (default: ['l', 't'])
{'l', 'r', 't', 'b'}
Direction in which text in a spanning cell will flow.
line_tol* : int, optional (default: 2)
line_close_tol* : int, optional (default: 2)
Tolerance parameter used to merge close vertical and horizontal
lines.
joint_tol* : int, optional (default: 2)
joint_close_tol* : int, optional (default: 2)
Tolerance parameter used to decide whether the detected lines
and points lie close to each other.
threshold_blocksize* : int, optional (default: 15)
@@ -90,30 +71,22 @@ def read_pdf(
Number of times for erosion/dilation is applied.
For more information, refer `OpenCV's dilate <https://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html#dilate>`_.
resolution* : int, optional (default: 300)
Resolution used for PDF to PNG conversion.
margins : tuple
PDFMiner char_margin, line_margin and word_margin.
For more information, refer `PDFMiner docs <https://euske.github.io/pdfminer/>`_.
Returns
-------
tables : camelot.core.TableList
"""
if flavor not in ["lattice", "stream"]:
raise NotImplementedError(
"Unknown flavor specified." " Use either 'lattice' or 'stream'"
)
if flavor not in ['lattice', 'stream']:
raise NotImplementedError("Unknown flavor specified."
" Use either 'lattice' or 'stream'")
with warnings.catch_warnings():
if suppress_stdout:
warnings.simplefilter("ignore")
validate_input(kwargs, flavor=flavor)
p = PDFHandler(filepath, pages=pages, password=password)
kwargs = remove_extra(kwargs, flavor=flavor)
tables = p.parse(
flavor=flavor,
suppress_stdout=suppress_stdout,
layout_kwargs=layout_kwargs,
**kwargs
)
return tables
validate_input(kwargs, flavor=flavor)
p = PDFHandler(filepath, pages)
kwargs = remove_extra(kwargs, flavor=flavor)
tables = p.parse(flavor=flavor, **kwargs)
return tables


@@ -1,4 +1,4 @@
# -*- coding: utf-8 -*-
from .stream import Stream
from .lattice import Lattice
from .lattice import Lattice


@@ -6,15 +6,16 @@ from ..utils import get_page_layout, get_text_objects
class BaseParser(object):
"""Defines a base parser."""
def _generate_layout(self, filename, layout_kwargs):
"""Defines a base parser.
"""
def _generate_layout(self, filename):
self.filename = filename
self.layout_kwargs = layout_kwargs
self.layout, self.dimensions = get_page_layout(filename, **layout_kwargs)
self.images = get_text_objects(self.layout, ltype="image")
self.horizontal_text = get_text_objects(self.layout, ltype="horizontal_text")
self.vertical_text = get_text_objects(self.layout, ltype="vertical_text")
self.layout, self.dimensions = get_page_layout(
self.filename,
char_margin=self.char_margin,
line_margin=self.line_margin,
word_margin=self.word_margin)
self.horizontal_text = get_text_objects(self.layout, ltype="lh")
self.vertical_text = get_text_objects(self.layout, ltype="lv")
self.pdf_width, self.pdf_height = self.dimensions
self.rootname, __ = os.path.splitext(self.filename)
self.imagename = "".join([self.rootname, ".png"])
self.rootname, __ = os.path.splitext(self.filename)


@@ -1,37 +1,24 @@
# -*- coding: utf-8 -*-
from __future__ import division
import os
import sys
import copy
import locale
import logging
import warnings
import subprocess
import numpy as np
import pandas as pd
from .base import BaseParser
from ..core import Table
from ..utils import (
scale_image,
scale_pdf,
segments_in_bbox,
text_in_bbox,
merge_close_lines,
get_table_index,
compute_accuracy,
compute_whitespace,
)
from ..image_processing import (
adaptive_threshold,
find_lines,
find_contours,
find_joints,
)
from ..backends.image_conversion import BACKENDS
from ..utils import (scale_image, scale_pdf, segments_in_bbox, text_in_bbox,
merge_close_lines, get_table_index, compute_accuracy,
compute_whitespace, setup_logging, encode_)
from ..image_processing import (adaptive_threshold, find_lines,
find_table_contours, find_table_joints)
logger = logging.getLogger("camelot")
logger = setup_logging(__name__)
class Lattice(BaseParser):
@@ -40,17 +27,13 @@ class Lattice(BaseParser):
Parameters
----------
table_regions : list, optional (default: None)
List of page regions that may contain tables of the form x1,y1,x2,y2
where (x1, y1) -> left-top and (x2, y2) -> right-bottom
in PDF coordinate space.
table_areas : list, optional (default: None)
table_area : list, optional (default: None)
List of table area strings of the form x1,y1,x2,y2
where (x1, y1) -> left-top and (x2, y2) -> right-bottom
in PDF coordinate space.
process_background : bool, optional (default: False)
Process background lines.
line_scale : int, optional (default: 15)
line_size_scaling : int, optional (default: 15)
Line size scaling factor. The larger the value the smaller
the detected lines. Making it very large will lead to text
being detected as lines.
@@ -66,13 +49,10 @@ class Lattice(BaseParser):
flag_size : bool, optional (default: False)
Flag text based on font size. Useful to detect
super/subscripts. Adds <s></s> around flagged text.
strip_text : str, optional (default: '')
Characters that should be stripped from a string before
assigning it to a cell.
line_tol : int, optional (default: 2)
line_close_tol : int, optional (default: 2)
Tolerance parameter used to merge close vertical and horizontal
lines.
joint_tol : int, optional (default: 2)
joint_close_tol : int, optional (default: 2)
Tolerance parameter used to decide whether the detected lines
and points lie close to each other.
threshold_blocksize : int, optional (default: 15)
@@ -89,77 +69,30 @@ class Lattice(BaseParser):
Number of times for erosion/dilation is applied.
For more information, refer `OpenCV's dilate <https://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html#dilate>`_.
resolution : int, optional (default: 300)
Resolution used for PDF to PNG conversion.
margins : tuple
PDFMiner char_margin, line_margin and word_margin.
For more information, refer `PDFMiner docs <https://euske.github.io/pdfminer/>`_.
"""
def __init__(
self,
table_regions=None,
table_areas=None,
process_background=False,
line_scale=15,
copy_text=None,
shift_text=["l", "t"],
split_text=False,
flag_size=False,
strip_text="",
line_tol=2,
joint_tol=2,
threshold_blocksize=15,
threshold_constant=-2,
iterations=0,
resolution=300,
backend="ghostscript",
**kwargs,
):
self.table_regions = table_regions
self.table_areas = table_areas
def __init__(self, table_area=None, process_background=False,
line_size_scaling=15, copy_text=None, shift_text=['l', 't'],
split_text=False, flag_size=False, line_close_tol=2,
joint_close_tol=2, threshold_blocksize=15, threshold_constant=-2,
iterations=0, margins=(1.0, 0.5, 0.1), **kwargs):
self.table_area = table_area
self.process_background = process_background
self.line_scale = line_scale
self.line_size_scaling = line_size_scaling
self.copy_text = copy_text
self.shift_text = shift_text
self.split_text = split_text
self.flag_size = flag_size
self.strip_text = strip_text
self.line_tol = line_tol
self.joint_tol = joint_tol
self.line_close_tol = line_close_tol
self.joint_close_tol = joint_close_tol
self.threshold_blocksize = threshold_blocksize
self.threshold_constant = threshold_constant
self.iterations = iterations
self.resolution = resolution
self.backend = Lattice._get_backend(backend)
@staticmethod
def _get_backend(backend):
def implements_convert():
methods = [
method for method in dir(backend) if method.startswith("__") is False
]
return "convert" in methods
if isinstance(backend, str):
if backend not in BACKENDS.keys():
raise NotImplementedError(
f"Unknown backend '{backend}' specified. Please use either 'poppler' or 'ghostscript'."
)
if backend == "ghostscript":
warnings.warn(
"'ghostscript' will be replaced by 'poppler' as the default image conversion"
" backend in v0.12.0. You can try out 'poppler' with backend='poppler'.",
DeprecationWarning,
)
return BACKENDS[backend]()
else:
if not implements_convert():
raise NotImplementedError(
f"'{backend}' must implement a 'convert' method"
)
return backend
self.char_margin, self.line_margin, self.word_margin = margins
@staticmethod
def _reduce_index(t, idx, shift_text):
@@ -187,19 +120,19 @@ class Lattice(BaseParser):
indices = []
for r_idx, c_idx, text in idx:
for d in shift_text:
if d == "l":
if d == 'l':
if t.cells[r_idx][c_idx].hspan:
while not t.cells[r_idx][c_idx].left:
c_idx -= 1
if d == "r":
if d == 'r':
if t.cells[r_idx][c_idx].hspan:
while not t.cells[r_idx][c_idx].right:
c_idx += 1
if d == "t":
if d == 't':
if t.cells[r_idx][c_idx].vspan:
while not t.cells[r_idx][c_idx].top:
r_idx -= 1
if d == "b":
if d == 'b':
if t.cells[r_idx][c_idx].vspan:
while not t.cells[r_idx][c_idx].bottom:
r_idx += 1
@@ -228,37 +161,33 @@ class Lattice(BaseParser):
if f == "h":
for i in range(len(t.cells)):
for j in range(len(t.cells[i])):
if t.cells[i][j].text.strip() == "":
if t.cells[i][j].text.strip() == '':
if t.cells[i][j].hspan and not t.cells[i][j].left:
t.cells[i][j].text = t.cells[i][j - 1].text
elif f == "v":
for i in range(len(t.cells)):
for j in range(len(t.cells[i])):
if t.cells[i][j].text.strip() == "":
if t.cells[i][j].text.strip() == '':
if t.cells[i][j].vspan and not t.cells[i][j].top:
t.cells[i][j].text = t.cells[i - 1][j].text
return t
def _generate_image(self):
self.imagename = ''.join([self.rootname, '.png'])
gs_call = [
"-q", "-sDEVICE=png16m", "-o", self.imagename, "-r600", self.filename
]
if "ghostscript" in subprocess.check_output(["gs", "-version"]).lower():
gs_call.insert(0, "gs")
else:
gs_call.insert(0, "gsc")
subprocess.call(gs_call, stdout=open(os.devnull, 'w'),
stderr=subprocess.STDOUT)
def _generate_table_bbox(self):
def scale_areas(areas):
scaled_areas = []
for area in areas:
x1, y1, x2, y2 = area.split(",")
x1 = float(x1)
y1 = float(y1)
x2 = float(x2)
y2 = float(y2)
x1, y1, x2, y2 = scale_pdf((x1, y1, x2, y2), image_scalers)
scaled_areas.append((x1, y1, abs(x2 - x1), abs(y2 - y1)))
return scaled_areas
self.image, self.threshold = adaptive_threshold(
self.imagename,
process_background=self.process_background,
blocksize=self.threshold_blocksize,
c=self.threshold_constant,
)
self.imagename, process_background=self.process_background,
blocksize=self.threshold_blocksize, c=self.threshold_constant)
image_width = self.image.shape[1]
image_height = self.image.shape[0]
image_width_scaler = image_width / float(self.pdf_width)
@@ -268,110 +197,85 @@ class Lattice(BaseParser):
image_scalers = (image_width_scaler, image_height_scaler, self.pdf_height)
pdf_scalers = (pdf_width_scaler, pdf_height_scaler, image_height)
if self.table_areas is None:
regions = None
if self.table_regions is not None:
regions = scale_areas(self.table_regions)
vertical_mask, vertical_segments = find_lines(
self.threshold, direction='vertical',
line_size_scaling=self.line_size_scaling, iterations=self.iterations)
horizontal_mask, horizontal_segments = find_lines(
self.threshold, direction='horizontal',
line_size_scaling=self.line_size_scaling, iterations=self.iterations)
vertical_mask, vertical_segments = find_lines(
self.threshold,
regions=regions,
direction="vertical",
line_scale=self.line_scale,
iterations=self.iterations,
)
horizontal_mask, horizontal_segments = find_lines(
self.threshold,
regions=regions,
direction="horizontal",
line_scale=self.line_scale,
iterations=self.iterations,
)
contours = find_contours(vertical_mask, horizontal_mask)
table_bbox = find_joints(contours, vertical_mask, horizontal_mask)
if self.table_area is not None:
areas = []
for area in self.table_area:
x1, y1, x2, y2 = area.split(",")
x1 = float(x1)
y1 = float(y1)
x2 = float(x2)
y2 = float(y2)
x1, y1, x2, y2 = scale_pdf((x1, y1, x2, y2), image_scalers)
areas.append((x1, y1, abs(x2 - x1), abs(y2 - y1)))
table_bbox = find_table_joints(areas, vertical_mask, horizontal_mask)
else:
vertical_mask, vertical_segments = find_lines(
self.threshold,
direction="vertical",
line_scale=self.line_scale,
iterations=self.iterations,
)
horizontal_mask, horizontal_segments = find_lines(
self.threshold,
direction="horizontal",
line_scale=self.line_scale,
iterations=self.iterations,
)
areas = scale_areas(self.table_areas)
table_bbox = find_joints(areas, vertical_mask, horizontal_mask)
contours = find_table_contours(vertical_mask, horizontal_mask)
table_bbox = find_table_joints(contours, vertical_mask, horizontal_mask)
self.table_bbox_unscaled = copy.deepcopy(table_bbox)
self.table_bbox, self.vertical_segments, self.horizontal_segments = scale_image(
table_bbox, vertical_segments, horizontal_segments, pdf_scalers
)
table_bbox, vertical_segments, horizontal_segments, pdf_scalers)
def _generate_columns_and_rows(self, table_idx, tk):
# select elements which lie within table_bbox
t_bbox = {}
v_s, h_s = segments_in_bbox(
tk, self.vertical_segments, self.horizontal_segments
)
t_bbox["horizontal"] = text_in_bbox(tk, self.horizontal_text)
t_bbox["vertical"] = text_in_bbox(tk, self.vertical_text)
t_bbox["horizontal"].sort(key=lambda x: (-x.y0, x.x0))
t_bbox["vertical"].sort(key=lambda x: (x.x0, -x.y0))
tk, self.vertical_segments, self.horizontal_segments)
t_bbox['horizontal'] = text_in_bbox(tk, self.horizontal_text)
t_bbox['vertical'] = text_in_bbox(tk, self.vertical_text)
self.t_bbox = t_bbox
for direction in t_bbox:
t_bbox[direction].sort(key=lambda x: (-x.y0, x.x0))
cols, rows = zip(*self.table_bbox[tk])
cols, rows = list(cols), list(rows)
cols.extend([tk[0], tk[2]])
rows.extend([tk[1], tk[3]])
# sort horizontal and vertical segments
cols = merge_close_lines(sorted(cols), line_tol=self.line_tol)
rows = merge_close_lines(sorted(rows, reverse=True), line_tol=self.line_tol)
cols = merge_close_lines(
sorted(cols), line_close_tol=self.line_close_tol)
rows = merge_close_lines(
sorted(rows, reverse=True), line_close_tol=self.line_close_tol)
# make grid using x and y coord of shortlisted rows and cols
cols = [(cols[i], cols[i + 1]) for i in range(0, len(cols) - 1)]
rows = [(rows[i], rows[i + 1]) for i in range(0, len(rows) - 1)]
cols = [(cols[i], cols[i + 1])
for i in range(0, len(cols) - 1)]
rows = [(rows[i], rows[i + 1])
for i in range(0, len(rows) - 1)]
return cols, rows, v_s, h_s
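The grid above is built by first collapsing nearly-coincident line coordinates, then pairing neighbours into cell intervals. A minimal sketch of that step, assuming a hypothetical stand-alone `merge_close_lines` (the real helper lives in camelot's utils and takes `line_tol` or `line_close_tol` depending on the version):

```python
def merge_close_lines(coords, line_tol=2):
    # collapse coordinates lying within line_tol of each other
    # into a single representative value (their midpoint)
    merged = []
    for c in coords:
        if merged and abs(merged[-1] - c) <= line_tol:
            merged[-1] = (merged[-1] + c) / 2.0
        else:
            merged.append(c)
    return merged

cols = merge_close_lines(sorted([10.0, 11.5, 50.0, 51.0, 120.0]))
# pair adjacent coordinates into (left, right) cell boundaries
cells = [(cols[i], cols[i + 1]) for i in range(len(cols) - 1)]
```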
def _generate_table(self, table_idx, cols, rows, **kwargs):
v_s = kwargs.get("v_s")
h_s = kwargs.get("h_s")
v_s = kwargs.get('v_s')
h_s = kwargs.get('h_s')
if v_s is None or h_s is None:
raise ValueError("No segments found on {}".format(self.rootname))
raise ValueError('No segments found on {}'.format(self.rootname))
table = Table(cols, rows)
# set table edges to True using ver+hor lines
table = table.set_edges(v_s, h_s, joint_tol=self.joint_tol)
table = table.set_edges(v_s, h_s, joint_close_tol=self.joint_close_tol)
# set table border edges to True
table = table.set_border()
# set spanning cells to True
table = table.set_span()
pos_errors = []
# TODO: have a single list in place of two directional ones?
# sorted on x-coordinate based on reading order i.e. LTR or RTL
for direction in ["vertical", "horizontal"]:
for direction in self.t_bbox:
for t in self.t_bbox[direction]:
indices, error = get_table_index(
table,
t,
direction,
split_text=self.split_text,
flag_size=self.flag_size,
strip_text=self.strip_text,
)
table, t, direction, split_text=self.split_text,
flag_size=self.flag_size)
if indices[:2] != (-1, -1):
pos_errors.append(error)
indices = Lattice._reduce_index(
table, indices, shift_text=self.shift_text
)
indices = Lattice._reduce_index(table, indices, shift_text=self.shift_text)
for r_idx, c_idx, text in indices:
table.cells[r_idx][c_idx].text = text
accuracy = compute_accuracy([[100, pos_errors]])
@@ -380,15 +284,16 @@ class Lattice(BaseParser):
table = Lattice._copy_spanning_text(table, copy_text=self.copy_text)
data = table.data
data = encode_(data)
table.df = pd.DataFrame(data)
table.shape = table.df.shape
whitespace = compute_whitespace(data)
table.flavor = "lattice"
table.flavor = 'lattice'
table.accuracy = accuracy
table.whitespace = whitespace
table.order = table_idx + 1
table.page = int(os.path.basename(self.rootname).replace("page-", ""))
table.page = int(os.path.basename(self.rootname).replace('page-', ''))
# for plotting
_text = []
@@ -397,39 +302,27 @@ class Lattice(BaseParser):
table._text = _text
table._image = (self.image, self.table_bbox_unscaled)
table._segments = (self.vertical_segments, self.horizontal_segments)
table._textedges = None
return table
def extract_tables(self, filename, suppress_stdout=False, layout_kwargs={}):
self._generate_layout(filename, layout_kwargs)
if not suppress_stdout:
logger.info("Processing {}".format(os.path.basename(self.rootname)))
def extract_tables(self, filename):
logger.info('Processing {}'.format(os.path.basename(filename)))
self._generate_layout(filename)
if not self.horizontal_text:
if self.images:
warnings.warn(
"{} is image-based, camelot only works on"
" text-based pages.".format(os.path.basename(self.rootname))
)
else:
warnings.warn(
"No tables found on {}".format(os.path.basename(self.rootname))
)
logger.info("No tables found on {}".format(
os.path.basename(self.rootname)))
return []
self.backend.convert(self.filename, self.imagename)
self._generate_image()
self._generate_table_bbox()
_tables = []
# sort tables based on y-coord
for table_idx, tk in enumerate(
sorted(self.table_bbox.keys(), key=lambda x: x[1], reverse=True)
):
for table_idx, tk in enumerate(sorted(self.table_bbox.keys(),
key=lambda x: x[1], reverse=True)):
cols, rows, v_s, h_s = self._generate_columns_and_rows(table_idx, tk)
table = self._generate_table(table_idx, cols, rows, v_s=v_s, h_s=h_s)
table._bbox = tk
_tables.append(table)
return _tables
return _tables
@@ -1,18 +1,19 @@
# -*- coding: utf-8 -*-
from __future__ import division
import os
import logging
import warnings
import numpy as np
import pandas as pd
from .base import BaseParser
from ..core import TextEdges, Table
from ..utils import text_in_bbox, get_table_index, compute_accuracy, compute_whitespace
from ..core import Table
from ..utils import (text_in_bbox, get_table_index, compute_accuracy,
compute_whitespace, setup_logging, encode_)
logger = logging.getLogger("camelot")
logger = setup_logging(__name__)
class Stream(BaseParser):
@@ -24,11 +25,7 @@ class Stream(BaseParser):
Parameters
----------
table_regions : list, optional (default: None)
List of page regions that may contain tables of the form x1,y1,x2,y2
where (x1, y1) -> left-top and (x2, y2) -> right-bottom
in PDF coordinate space.
table_areas : list, optional (default: None)
table_area : list, optional (default: None)
List of table area strings of the form x1,y1,x2,y2
where (x1, y1) -> left-top and (x2, y2) -> right-bottom
in PDF coordinate space.
@@ -40,43 +37,29 @@ class Stream(BaseParser):
flag_size : bool, optional (default: False)
Flag text based on font size. Useful to detect
super/subscripts. Adds <s></s> around flagged text.
strip_text : str, optional (default: '')
Characters that should be stripped from a string before
assigning it to a cell.
edge_tol : int, optional (default: 50)
Tolerance parameter for extending textedges vertically.
row_tol : int, optional (default: 2)
row_close_tol : int, optional (default: 2)
Tolerance parameter used to combine text vertically,
to generate rows.
column_tol : int, optional (default: 0)
col_close_tol : int, optional (default: 0)
Tolerance parameter used to combine text horizontally,
to generate columns.
margins : tuple, optional (default: (1.0, 0.5, 0.1))
PDFMiner char_margin, line_margin and word_margin.
For more information, refer `PDFMiner docs <https://euske.github.io/pdfminer/>`_.
"""
def __init__(
self,
table_regions=None,
table_areas=None,
columns=None,
split_text=False,
flag_size=False,
strip_text="",
edge_tol=50,
row_tol=2,
column_tol=0,
**kwargs,
):
self.table_regions = table_regions
self.table_areas = table_areas
def __init__(self, table_area=None, columns=None, split_text=False,
flag_size=False, row_close_tol=2, col_close_tol=0,
margins=(1.0, 0.5, 0.1), **kwargs):
self.table_area = table_area
self.columns = columns
self._validate_columns()
self.split_text = split_text
self.flag_size = flag_size
self.strip_text = strip_text
self.edge_tol = edge_tol
self.row_tol = row_tol
self.column_tol = column_tol
self.row_close_tol = row_close_tol
self.col_close_tol = col_close_tol
self.char_margin, self.line_margin, self.word_margin = margins
@staticmethod
def _text_bbox(t_bbox):
@@ -102,7 +85,7 @@ class Stream(BaseParser):
return text_bbox
@staticmethod
def _group_rows(text, row_tol=2):
def _group_rows(text, row_close_tol=2):
"""Groups PDFMiner text objects into rows vertically
within a tolerance.
@@ -110,7 +93,7 @@ class Stream(BaseParser):
----------
text : list
List of PDFMiner text objects.
row_tol : int, optional (default: 2)
row_close_tol : int, optional (default: 2)
Returns
-------
@@ -121,25 +104,22 @@ class Stream(BaseParser):
row_y = 0
rows = []
temp = []
for t in text:
# is checking for upright necessary?
# if t.get_text().strip() and all([obj.upright for obj in t._objs if
# type(obj) is LTChar]):
if t.get_text().strip():
if not np.isclose(row_y, t.y0, atol=row_tol):
if not np.isclose(row_y, t.y0, atol=row_close_tol):
rows.append(sorted(temp, key=lambda t: t.x0))
temp = []
row_y = t.y0
temp.append(t)
rows.append(sorted(temp, key=lambda t: t.x0))
if len(rows) > 1:
__ = rows.pop(0) # TODO: hacky
__ = rows.pop(0) # hacky
return rows
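Row grouping walks text objects top-to-bottom and starts a new row whenever `y0` jumps by more than the tolerance. A simplified sketch with plain `(x0, y0)` tuples standing in for PDFMiner objects (unlike the parser above, it refreshes the reference y on every object):

```python
def group_rows(objs, row_tol=2):
    # objs: (x0, y0) tuples, pre-sorted top-to-bottom (descending y0)
    rows, temp, row_y = [], [], None
    for x0, y0 in objs:
        if row_y is not None and abs(row_y - y0) > row_tol:
            rows.append(sorted(temp))  # finish the row, left-to-right
            temp = []
        row_y = y0
        temp.append((x0, y0))
    if temp:
        rows.append(sorted(temp))
    return rows

rows = group_rows([(30, 700), (10, 701), (20, 650)])
```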
@staticmethod
def _merge_columns(l, column_tol=0):
def _merge_columns(l, col_close_tol=0):
"""Merges column boundaries horizontally if they overlap
or lie within a tolerance.
@@ -147,7 +127,7 @@ class Stream(BaseParser):
----------
l : list
List of column x-coordinate tuples.
column_tol : int, optional (default: 0)
col_close_tol : int, optional (default: 0)
Returns
-------
@@ -161,18 +141,17 @@ class Stream(BaseParser):
merged.append(higher)
else:
lower = merged[-1]
if column_tol >= 0:
if higher[0] <= lower[1] or np.isclose(
higher[0], lower[1], atol=column_tol
):
if col_close_tol >= 0:
if (higher[0] <= lower[1] or
np.isclose(higher[0], lower[1], atol=col_close_tol)):
upper_bound = max(lower[1], higher[1])
lower_bound = min(lower[0], higher[0])
merged[-1] = (lower_bound, upper_bound)
else:
merged.append(higher)
elif column_tol < 0:
elif col_close_tol < 0:
if higher[0] <= lower[1]:
if np.isclose(higher[0], lower[1], atol=abs(column_tol)):
if np.isclose(higher[0], lower[1], atol=abs(col_close_tol)):
merged.append(higher)
else:
upper_bound = max(lower[1], higher[1])
@@ -199,18 +178,17 @@ class Stream(BaseParser):
List of continuous row y-coordinate tuples.
"""
row_mids = [
sum([(t.y0 + t.y1) / 2 for t in r]) / len(r) if len(r) > 0 else 0
for r in rows_grouped
]
row_mids = [sum([(t.y0 + t.y1) / 2 for t in r]) / len(r)
if len(r) > 0 else 0 for r in rows_grouped]
rows = [(row_mids[i] + row_mids[i - 1]) / 2 for i in range(1, len(row_mids))]
rows.insert(0, text_y_max)
rows.append(text_y_min)
rows = [(rows[i], rows[i + 1]) for i in range(0, len(rows) - 1)]
rows = [(rows[i], rows[i + 1])
for i in range(0, len(rows) - 1)]
return rows
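`_join_rows` converts the mean y of each grouped row into row boundaries: every boundary sits midway between consecutive row midpoints, and the outermost boundaries are capped by the text extremes. A self-contained sketch:

```python
def join_rows(row_mids, y_max, y_min):
    # row_mids: mean y of each grouped row, top-to-bottom
    bounds = [(row_mids[i] + row_mids[i - 1]) / 2
              for i in range(1, len(row_mids))]
    bounds.insert(0, y_max)   # top of the text
    bounds.append(y_min)      # bottom of the text
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

rows = join_rows([700, 650, 600], 710, 590)
```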
@staticmethod
def _add_columns(cols, text, row_tol):
def _add_columns(cols, text, row_close_tol):
"""Adds columns to existing list by taking into account
the text that lies outside the current column x-coordinates.
@@ -229,11 +207,10 @@
"""
if text:
text = Stream._group_rows(text, row_tol=row_tol)
text = Stream._group_rows(text, row_close_tol=row_close_tol)
elements = [len(r) for r in text]
new_cols = [
(t.x0, t.x1) for r in text if len(r) == max(elements) for t in r
]
new_cols = [(t.x0, t.x1)
for r in text if len(r) == max(elements) for t in r]
cols.extend(Stream._merge_columns(sorted(new_cols)))
return cols
@@ -258,80 +235,42 @@ class Stream(BaseParser):
cols = [(cols[i][0] + cols[i - 1][1]) / 2 for i in range(1, len(cols))]
cols.insert(0, text_x_min)
cols.append(text_x_max)
cols = [(cols[i], cols[i + 1]) for i in range(0, len(cols) - 1)]
cols = [(cols[i], cols[i + 1])
for i in range(0, len(cols) - 1)]
return cols
def _validate_columns(self):
if self.table_areas is not None and self.columns is not None:
if len(self.table_areas) != len(self.columns):
raise ValueError("Length of table_areas and columns should be equal")
def _nurminen_table_detection(self, textlines):
"""A general implementation of the table detection algorithm
described by Anssi Nurminen's master's thesis.
Link: https://dspace.cc.tut.fi/dpub/bitstream/handle/123456789/21520/Nurminen.pdf?sequence=3
Assumes that tables are situated relatively far apart
vertically.
"""
# TODO: add support for arabic text #141
# sort textlines in reading order
textlines.sort(key=lambda x: (-x.y0, x.x0))
textedges = TextEdges(edge_tol=self.edge_tol)
# generate left, middle and right textedges
textedges.generate(textlines)
# select relevant edges
relevant_textedges = textedges.get_relevant()
self.textedges.extend(relevant_textedges)
# guess table areas using textlines and relevant edges
table_bbox = textedges.get_table_areas(textlines, relevant_textedges)
# treat whole page as table area if no table areas found
if not len(table_bbox):
table_bbox = {(0, 0, self.pdf_width, self.pdf_height): None}
return table_bbox
if self.table_area is not None and self.columns is not None:
if len(self.table_area) != len(self.columns):
raise ValueError("Length of table_area and columns"
" should be equal")
def _generate_table_bbox(self):
self.textedges = []
if self.table_areas is None:
hor_text = self.horizontal_text
if self.table_regions is not None:
# filter horizontal text
hor_text = []
for region in self.table_regions:
x1, y1, x2, y2 = region.split(",")
x1 = float(x1)
y1 = float(y1)
x2 = float(x2)
y2 = float(y2)
region_text = text_in_bbox((x1, y2, x2, y1), self.horizontal_text)
hor_text.extend(region_text)
# find tables based on nurminen's detection algorithm
table_bbox = self._nurminen_table_detection(hor_text)
else:
if self.table_area is not None:
table_bbox = {}
for area in self.table_areas:
for area in self.table_area:
x1, y1, x2, y2 = area.split(",")
x1 = float(x1)
y1 = float(y1)
x2 = float(x2)
y2 = float(y2)
table_bbox[(x1, y2, x2, y1)] = None
else:
table_bbox = {(0, 0, self.pdf_width, self.pdf_height): None}
self.table_bbox = table_bbox
def _generate_columns_and_rows(self, table_idx, tk):
# select elements which lie within table_bbox
t_bbox = {}
t_bbox["horizontal"] = text_in_bbox(tk, self.horizontal_text)
t_bbox["vertical"] = text_in_bbox(tk, self.vertical_text)
t_bbox["horizontal"].sort(key=lambda x: (-x.y0, x.x0))
t_bbox["vertical"].sort(key=lambda x: (x.x0, -x.y0))
t_bbox['horizontal'] = text_in_bbox(tk, self.horizontal_text)
t_bbox['vertical'] = text_in_bbox(tk, self.vertical_text)
self.t_bbox = t_bbox
for direction in self.t_bbox:
self.t_bbox[direction].sort(key=lambda x: (-x.y0, x.x0))
text_x_min, text_y_min, text_x_max, text_y_max = self._text_bbox(self.t_bbox)
rows_grouped = self._group_rows(self.t_bbox["horizontal"], row_tol=self.row_tol)
rows_grouped = self._group_rows(self.t_bbox['horizontal'], row_close_tol=self.row_close_tol)
rows = self._join_rows(rows_grouped, text_y_max, text_y_min)
elements = [len(r) for r in rows_grouped]
@@ -340,74 +279,43 @@ class Stream(BaseParser):
# take (0, pdf_width) by default
# similar to else condition
# len can't be 1
cols = self.columns[table_idx].split(",")
cols = self.columns[table_idx].split(',')
cols = [float(c) for c in cols]
cols.insert(0, text_x_min)
cols.append(text_x_max)
cols = [(cols[i], cols[i + 1]) for i in range(0, len(cols) - 1)]
else:
# calculate mode of the list of number of elements in
# each row to guess the number of columns
if not len(elements):
cols = [(text_x_min, text_x_max)]
else:
ncols = max(set(elements), key=elements.count)
if ncols == 1:
# if the mode is 1, the page usually contains no tables,
# but stray single-element rows can skew the list; drop
# all 1s in that case and, if anything remains, use the
# mode of the rest
elements = list(filter(lambda x: x != 1, elements))
if len(elements):
ncols = max(set(elements), key=elements.count)
else:
warnings.warn(f"No tables found in table area {table_idx + 1}")
cols = [
(t.x0, t.x1) for r in rows_grouped if len(r) == ncols for t in r
]
cols = self._merge_columns(sorted(cols), column_tol=self.column_tol)
inner_text = []
for i in range(1, len(cols)):
left = cols[i - 1][1]
right = cols[i][0]
inner_text.extend(
[
t
for direction in self.t_bbox
ncols = max(set(elements), key=elements.count)
if ncols == 1:
logger.info("No tables found on {}".format(
os.path.basename(self.rootname)))
cols = [(t.x0, t.x1) for r in rows_grouped if len(r) == ncols for t in r]
cols = self._merge_columns(sorted(cols), col_close_tol=self.col_close_tol)
inner_text = []
for i in range(1, len(cols)):
left = cols[i - 1][1]
right = cols[i][0]
inner_text.extend([t for direction in self.t_bbox
for t in self.t_bbox[direction]
if t.x0 > left and t.x1 < right])
outer_text = [t for direction in self.t_bbox
for t in self.t_bbox[direction]
if t.x0 > left and t.x1 < right
]
)
outer_text = [
t
for direction in self.t_bbox
for t in self.t_bbox[direction]
if t.x0 > cols[-1][1] or t.x1 < cols[0][0]
]
inner_text.extend(outer_text)
cols = self._add_columns(cols, inner_text, self.row_tol)
cols = self._join_columns(cols, text_x_min, text_x_max)
if t.x0 > cols[-1][1] or t.x1 < cols[0][0]]
inner_text.extend(outer_text)
cols = self._add_columns(cols, inner_text, self.row_close_tol)
cols = self._join_columns(cols, text_x_min, text_x_max)
return cols, rows
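When no columns are passed in, the parser guesses the column count as the mode of the per-row element counts, discarding single-element rows (headers, footnotes) when they skew that mode. The guess in isolation:

```python
def guess_ncols(elements):
    # elements: number of text objects found in each grouped row
    ncols = max(set(elements), key=elements.count)
    if ncols == 1:
        # a mode of 1 usually means no table; retry without the 1s
        filtered = [e for e in elements if e != 1]
        if filtered:
            ncols = max(set(filtered), key=filtered.count)
    return ncols

n = guess_ncols([1, 1, 3, 3, 1, 3, 1, 1])
```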
def _generate_table(self, table_idx, cols, rows, **kwargs):
table = Table(cols, rows)
table = table.set_all_edges()
pos_errors = []
# TODO: have a single list in place of two directional ones?
# sorted on x-coordinate based on reading order i.e. LTR or RTL
for direction in ["vertical", "horizontal"]:
for direction in self.t_bbox:
for t in self.t_bbox[direction]:
indices, error = get_table_index(
table,
t,
direction,
split_text=self.split_text,
flag_size=self.flag_size,
strip_text=self.strip_text,
)
table, t, direction, split_text=self.split_text,
flag_size=self.flag_size)
if indices[:2] != (-1, -1):
pos_errors.append(error)
for r_idx, c_idx, text in indices:
@@ -415,15 +323,16 @@ class Stream(BaseParser):
accuracy = compute_accuracy([[100, pos_errors]])
data = table.data
data = encode_(data)
table.df = pd.DataFrame(data)
table.shape = table.df.shape
whitespace = compute_whitespace(data)
table.flavor = "stream"
table.flavor = 'stream'
table.accuracy = accuracy
table.whitespace = whitespace
table.order = table_idx + 1
table.page = int(os.path.basename(self.rootname).replace("page-", ""))
table.page = int(os.path.basename(self.rootname).replace('page-', ''))
# for plotting
_text = []
@@ -432,37 +341,26 @@ class Stream(BaseParser):
table._text = _text
table._image = None
table._segments = None
table._textedges = self.textedges
return table
def extract_tables(self, filename, suppress_stdout=False, layout_kwargs={}):
self._generate_layout(filename, layout_kwargs)
base_filename = os.path.basename(self.rootname)
if not suppress_stdout:
logger.info(f"Processing {base_filename}")
def extract_tables(self, filename):
logger.info('Processing {}'.format(os.path.basename(filename)))
self._generate_layout(filename)
if not self.horizontal_text:
if self.images:
warnings.warn(
f"{base_filename} is image-based, camelot only works on"
" text-based pages."
)
else:
warnings.warn(f"No tables found on {base_filename}")
logger.info("No tables found on {}".format(
os.path.basename(self.rootname)))
return []
self._generate_table_bbox()
_tables = []
# sort tables based on y-coord
for table_idx, tk in enumerate(
sorted(self.table_bbox.keys(), key=lambda x: x[1], reverse=True)
):
for table_idx, tk in enumerate(sorted(self.table_bbox.keys(),
key=lambda x: x[1], reverse=True)):
cols, rows = self._generate_columns_and_rows(table_idx, tk)
table = self._generate_table(table_idx, cols, rows)
table._bbox = tk
_tables.append(table)
return _tables
return _tables
@@ -1,225 +1,108 @@
# -*- coding: utf-8 -*-
try:
import matplotlib.pyplot as plt
import matplotlib.patches as patches
except ImportError:
_HAS_MPL = False
else:
_HAS_MPL = True
import cv2
import matplotlib.pyplot as plt
import matplotlib.patches as patches
class PlotMethods(object):
def __call__(self, table, kind="text", filename=None):
"""Plot elements found on PDF page based on kind
specified, useful for debugging and playing with different
parameters to get the best output.
def plot_text(text):
"""Generates a plot for all text present on the PDF page.
Parameters
----------
table: camelot.core.Table
A Camelot Table.
kind : str, optional (default: 'text')
{'text', 'grid', 'contour', 'joint', 'line'}
The element type for which a plot should be generated.
filepath: str, optional (default: None)
Absolute path for saving the generated plot.
Parameters
----------
text : list
Returns
-------
fig : matplotlib.fig.Figure
"""
if not _HAS_MPL:
raise ImportError("matplotlib is required for plotting.")
if table.flavor == "lattice" and kind in ["textedge"]:
raise NotImplementedError(f"Lattice flavor does not support kind='{kind}'")
elif table.flavor == "stream" and kind in ["joint", "line"]:
raise NotImplementedError(f"Stream flavor does not support kind='{kind}'")
plot_method = getattr(self, kind)
fig = plot_method(table)
if filename is not None:
fig.savefig(filename)
return None
return fig
def text(self, table):
"""Generates a plot for all text elements present
on the PDF page.
Parameters
----------
table : camelot.core.Table
Returns
-------
fig : matplotlib.fig.Figure
"""
fig = plt.figure()
ax = fig.add_subplot(111, aspect="equal")
xs, ys = [], []
for t in table._text:
xs.extend([t[0], t[2]])
ys.extend([t[1], t[3]])
ax.add_patch(patches.Rectangle((t[0], t[1]), t[2] - t[0], t[3] - t[1]))
ax.set_xlim(min(xs) - 10, max(xs) + 10)
ax.set_ylim(min(ys) - 10, max(ys) + 10)
return fig
def grid(self, table):
"""Generates a plot for the detected table grids
on the PDF page.
Parameters
----------
table : camelot.core.Table
Returns
-------
fig : matplotlib.fig.Figure
"""
fig = plt.figure()
ax = fig.add_subplot(111, aspect="equal")
for row in table.cells:
for cell in row:
if cell.left:
ax.plot([cell.lb[0], cell.lt[0]], [cell.lb[1], cell.lt[1]])
if cell.right:
ax.plot([cell.rb[0], cell.rt[0]], [cell.rb[1], cell.rt[1]])
if cell.top:
ax.plot([cell.lt[0], cell.rt[0]], [cell.lt[1], cell.rt[1]])
if cell.bottom:
ax.plot([cell.lb[0], cell.rb[0]], [cell.lb[1], cell.rb[1]])
return fig
def contour(self, table):
"""Generates a plot for all table boundaries present
on the PDF page.
Parameters
----------
table : camelot.core.Table
Returns
-------
fig : matplotlib.fig.Figure
"""
try:
img, table_bbox = table._image
_FOR_LATTICE = True
except TypeError:
img, table_bbox = (None, {table._bbox: None})
_FOR_LATTICE = False
fig = plt.figure()
ax = fig.add_subplot(111, aspect="equal")
xs, ys = [], []
if not _FOR_LATTICE:
for t in table._text:
xs.extend([t[0], t[2]])
ys.extend([t[1], t[3]])
ax.add_patch(
patches.Rectangle(
(t[0], t[1]), t[2] - t[0], t[3] - t[1], color="blue"
)
)
for t in table_bbox.keys():
ax.add_patch(
patches.Rectangle(
(t[0], t[1]), t[2] - t[0], t[3] - t[1], fill=False, color="red"
)
"""
fig = plt.figure()
ax = fig.add_subplot(111, aspect='equal')
xs, ys = [], []
for t in text:
xs.extend([t[0], t[2]])
ys.extend([t[1], t[3]])
ax.add_patch(
patches.Rectangle(
(t[0], t[1]),
t[2] - t[0],
t[3] - t[1]
)
if not _FOR_LATTICE:
xs.extend([t[0], t[2]])
ys.extend([t[1], t[3]])
ax.set_xlim(min(xs) - 10, max(xs) + 10)
ax.set_ylim(min(ys) - 10, max(ys) + 10)
)
ax.set_xlim(min(xs) - 10, max(xs) + 10)
ax.set_ylim(min(ys) - 10, max(ys) + 10)
plt.show()
if _FOR_LATTICE:
ax.imshow(img)
return fig
def textedge(self, table):
"""Generates a plot for relevant textedges.
def plot_table(table):
"""Generates a plot for the table.
Parameters
----------
table : camelot.core.Table
Parameters
----------
table : camelot.core.Table
Returns
-------
fig : matplotlib.fig.Figure
"""
for row in table.cells:
for cell in row:
if cell.left:
plt.plot([cell.lb[0], cell.lt[0]],
[cell.lb[1], cell.lt[1]])
if cell.right:
plt.plot([cell.rb[0], cell.rt[0]],
[cell.rb[1], cell.rt[1]])
if cell.top:
plt.plot([cell.lt[0], cell.rt[0]],
[cell.lt[1], cell.rt[1]])
if cell.bottom:
plt.plot([cell.lb[0], cell.rb[0]],
[cell.lb[1], cell.rb[1]])
plt.show()
"""
fig = plt.figure()
ax = fig.add_subplot(111, aspect="equal")
xs, ys = [], []
for t in table._text:
xs.extend([t[0], t[2]])
ys.extend([t[1], t[3]])
ax.add_patch(
patches.Rectangle((t[0], t[1]), t[2] - t[0], t[3] - t[1], color="blue")
)
ax.set_xlim(min(xs) - 10, max(xs) + 10)
ax.set_ylim(min(ys) - 10, max(ys) + 10)
for te in table._textedges:
ax.plot([te.x, te.x], [te.y0, te.y1])
def plot_contour(image):
"""Generates a plot for all table boundaries present on the
PDF page.
return fig
Parameters
----------
image : tuple
def joint(self, table):
"""Generates a plot for all line intersections present
on the PDF page.
"""
img, table_bbox = image
for t in table_bbox.keys():
cv2.rectangle(img, (t[0], t[1]),
(t[2], t[3]), (255, 0, 0), 20)
plt.imshow(img)
plt.show()
Parameters
----------
table : camelot.core.Table
Returns
-------
fig : matplotlib.fig.Figure
def plot_joint(image):
"""Generates a plot for all line intersections present on the
PDF page.
"""
img, table_bbox = table._image
fig = plt.figure()
ax = fig.add_subplot(111, aspect="equal")
x_coord = []
y_coord = []
for k in table_bbox.keys():
for coord in table_bbox[k]:
x_coord.append(coord[0])
y_coord.append(coord[1])
ax.plot(x_coord, y_coord, "ro")
ax.imshow(img)
return fig
Parameters
----------
image : tuple
def line(self, table):
"""Generates a plot for all line segments present
on the PDF page.
"""
img, table_bbox = image
x_coord = []
y_coord = []
for k in table_bbox.keys():
for coord in table_bbox[k]:
x_coord.append(coord[0])
y_coord.append(coord[1])
plt.plot(x_coord, y_coord, 'ro')
plt.imshow(img)
plt.show()
Parameters
----------
table : camelot.core.Table
Returns
-------
fig : matplotlib.fig.Figure
def plot_line(segments):
"""Generates a plot for all line segments present on the PDF page.
"""
fig = plt.figure()
ax = fig.add_subplot(111, aspect="equal")
vertical, horizontal = table._segments
for v in vertical:
ax.plot([v[0], v[2]], [v[1], v[3]])
for h in horizontal:
ax.plot([h[0], h[2]], [h[1], h[3]])
return fig
Parameters
----------
segments : tuple
"""
vertical, horizontal = segments
for v in vertical:
plt.plot([v[0], v[2]], [v[1], v[3]])
for h in horizontal:
plt.plot([h[0], h[2]], [h[1], h[3]])
plt.show()
@@ -1,129 +1,62 @@
# -*- coding: utf-8 -*-
from __future__ import division
import os
import re
import random
import shutil
import string
import logging
import tempfile
import warnings
from itertools import groupby
from operator import itemgetter
import numpy as np
from pdfminer.pdfparser import PDFParser
from pdfminer.pdfdocument import PDFDocument
from pdfminer.pdfpage import PDFPage
from pdfminer.pdfpage import PDFTextExtractionNotAllowed
from pdfminer.pdfinterp import PDFResourceManager
from pdfminer.pdfinterp import PDFPageInterpreter
from pdfminer.pdfdevice import PDFDevice
from pdfminer.converter import PDFPageAggregator
from pdfminer.layout import (
LAParams,
LTAnno,
LTChar,
LTTextLineHorizontal,
LTTextLineVertical,
LTImage,
)
from urllib.request import Request, urlopen
from urllib.parse import urlparse as parse_url
from urllib.parse import uses_relative, uses_netloc, uses_params
from pdfminer.layout import (LAParams, LTAnno, LTChar, LTTextLineHorizontal,
LTTextLineVertical)
_VALID_URLS = set(uses_relative + uses_netloc + uses_params)
_VALID_URLS.discard("")
# https://github.com/pandas-dev/pandas/blob/master/pandas/io/common.py
def is_url(url):
"""Check to see if a URL has a valid protocol.
Parameters
----------
url : str or unicode
Returns
-------
isurl : bool
True if url has a valid protocol, otherwise False.
"""
try:
return parse_url(url).scheme in _VALID_URLS
except Exception:
return False
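The protocol check above can be exercised directly; `_VALID_URLS` is rebuilt here so the snippet stands alone, and the URLs are illustrative values only:

```python
from urllib.parse import urlparse, uses_netloc, uses_params, uses_relative

_VALID_URLS = set(uses_relative + uses_netloc + uses_params)
_VALID_URLS.discard("")

def is_url(url):
    # True only when the URL carries a recognised scheme
    try:
        return urlparse(url).scheme in _VALID_URLS
    except Exception:
        return False

ok = is_url("https://example.com/table.pdf")
bad = is_url("/local/path/table.pdf")
```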
def random_string(length):
ret = ""
while length:
ret += random.choice(
string.digits + string.ascii_lowercase + string.ascii_uppercase
)
length -= 1
return ret
def download_url(url):
"""Download file from specified URL.
Parameters
----------
url : str or unicode
Returns
-------
filepath : str or unicode
Temporary filepath.
"""
filename = f"{random_string(6)}.pdf"
with tempfile.NamedTemporaryFile("wb", delete=False) as f:
headers = {"User-Agent": "Mozilla/5.0"}
request = Request(url, None, headers)
obj = urlopen(request)
content_type = obj.info().get_content_type()
if content_type != "application/pdf":
raise NotImplementedError("File format not supported")
f.write(obj.read())
filepath = os.path.join(os.path.dirname(f.name), filename)
shutil.move(f.name, filepath)
return filepath
stream_kwargs = ["columns", "edge_tol", "row_tol", "column_tol"]
stream_kwargs = [
'columns',
'row_close_tol',
'col_close_tol'
]
lattice_kwargs = [
"process_background",
"line_scale",
"copy_text",
"shift_text",
"line_tol",
"joint_tol",
"threshold_blocksize",
"threshold_constant",
"iterations",
"resolution",
'process_background',
'line_size_scaling',
'copy_text',
'shift_text',
'line_close_tol',
'joint_close_tol',
'threshold_blocksize',
'threshold_constant',
'iterations'
]
def validate_input(kwargs, flavor="lattice"):
def validate_input(kwargs, flavor='lattice', geometry_type=False):
def check_intersection(parser_kwargs, input_kwargs):
isec = set(parser_kwargs).intersection(set(input_kwargs.keys()))
if isec:
raise ValueError(
f"{','.join(sorted(isec))} cannot be used with flavor='{flavor}'"
)
raise ValueError("{} cannot be used with flavor='{}'".format(
",".join(sorted(isec)), flavor))
if flavor == "lattice":
if flavor == 'lattice':
check_intersection(stream_kwargs, kwargs)
else:
check_intersection(lattice_kwargs, kwargs)
if geometry_type:
if flavor != 'lattice' and geometry_type in ['contour', 'joint', 'line']:
raise ValueError("Use geometry_type='{}' with flavor='lattice'".format(
geometry_type))
def remove_extra(kwargs, flavor="lattice"):
if flavor == "lattice":
def remove_extra(kwargs, flavor='lattice'):
if flavor == 'lattice':
for key in kwargs.keys():
if key in stream_kwargs:
kwargs.pop(key)
@@ -144,6 +77,35 @@ class TemporaryDirectory(object):
shutil.rmtree(self.name)
def setup_logging(name):
"""Sets up a logger with StreamHandler.
Parameters
----------
name : str
Returns
-------
logger : logging.Logger
"""
logger = logging.getLogger(name)
format_string = '%(asctime)s - %(levelname)s - %(funcName)s - %(message)s'
formatter = logging.Formatter(format_string, datefmt='%Y-%m-%dT%H:%M:%S')
handler = logging.StreamHandler()
handler.setLevel(logging.INFO)
handler.setFormatter(formatter)
logger.addHandler(handler)
return logger
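The helper above attaches a single `StreamHandler`; a minimal usage check (the logger name is chosen for illustration):

```python
import logging

def setup_logging(name):
    # logger with one INFO-level StreamHandler and ISO-style timestamps
    logger = logging.getLogger(name)
    fmt = "%(asctime)s - %(levelname)s - %(funcName)s - %(message)s"
    handler = logging.StreamHandler()
    handler.setLevel(logging.INFO)
    handler.setFormatter(logging.Formatter(fmt, datefmt="%Y-%m-%dT%H:%M:%S"))
    logger.addHandler(handler)
    return logger

log = setup_logging("camelot.demo")
```

Note that, as in the module above, calling it twice adds a second handler; real code may want to guard against that.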
logger = setup_logging(__name__)
def translate(x1, x2):
"""Translates x2 by x1.
@@ -178,6 +140,35 @@ def scale(x, s):
return x
def rotate(x1, y1, x2, y2, angle):
"""Rotates point x2, y2 about point x1, y1 by angle.
Parameters
----------
x1 : float
y1 : float
x2 : float
y2 : float
angle : float
Angle in radians.
Returns
-------
xnew : float
ynew : float
"""
s = np.sin(angle)
c = np.cos(angle)
x2 = translate(-x1, x2)
y2 = translate(-y1, y2)
xnew = c * x2 - s * y2
ynew = s * x2 + c * y2
xnew = translate(x1, xnew)
ynew = translate(y1, ynew)
return xnew, ynew
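A numeric sanity check of the rotation above: rotating (1, 0) about the origin by 90° lands on (0, 1), up to floating-point error:

```python
import math

def rotate(x1, y1, x2, y2, angle):
    # rotate (x2, y2) about the pivot (x1, y1) by angle radians
    s, c = math.sin(angle), math.cos(angle)
    x2, y2 = x2 - x1, y2 - y1      # move the pivot to the origin
    xnew = c * x2 - s * y2         # standard 2-D rotation matrix
    ynew = s * x2 + c * y2
    return xnew + x1, ynew + y1    # move back

x, y = rotate(0, 0, 1, 0, math.pi / 2)
```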
def scale_pdf(k, factors):
"""Translates and scales pdf coordinate space to image
coordinate space.
@@ -253,33 +244,29 @@ def scale_image(tables, v_segments, h_segments, factors):
v_segments_new = []
for v in v_segments:
x1, x2 = scale(v[0], scaling_factor_x), scale(v[2], scaling_factor_x)
y1, y2 = (
scale(abs(translate(-img_y, v[1])), scaling_factor_y),
scale(abs(translate(-img_y, v[3])), scaling_factor_y),
)
y1, y2 = scale(abs(translate(-img_y, v[1])), scaling_factor_y), scale(
abs(translate(-img_y, v[3])), scaling_factor_y)
v_segments_new.append((x1, y1, x2, y2))
h_segments_new = []
for h in h_segments:
x1, x2 = scale(h[0], scaling_factor_x), scale(h[2], scaling_factor_x)
y1, y2 = (
scale(abs(translate(-img_y, h[1])), scaling_factor_y),
scale(abs(translate(-img_y, h[3])), scaling_factor_y),
)
y1, y2 = scale(abs(translate(-img_y, h[1])), scaling_factor_y), scale(
abs(translate(-img_y, h[3])), scaling_factor_y)
h_segments_new.append((x1, y1, x2, y2))
return tables_new, v_segments_new, h_segments_new
def get_rotation(chars, horizontal_text, vertical_text):
def get_rotation(lttextlh, lttextlv, ltchar):
"""Detects if text in table is rotated or not using the current
transformation matrix (CTM) and returns its orientation.
Parameters
----------
horizontal_text : list
lttextlh : list
List of PDFMiner LTTextLineHorizontal objects.
vertical_text : list
lttextlv : list
List of PDFMiner LTTextLineVertical objects.
ltchar : list
List of PDFMiner LTChar objects.
@ -292,13 +279,13 @@ def get_rotation(chars, horizontal_text, vertical_text):
rotated 90 degree clockwise.
"""
rotation = ""
hlen = len([t for t in horizontal_text if t.get_text().strip()])
vlen = len([t for t in vertical_text if t.get_text().strip()])
rotation = ''
hlen = len([t for t in lttextlh if t.get_text().strip()])
vlen = len([t for t in lttextlv if t.get_text().strip()])
if hlen < vlen:
clockwise = sum(t.matrix[1] < 0 and t.matrix[2] > 0 for t in chars)
anticlockwise = sum(t.matrix[1] > 0 and t.matrix[2] < 0 for t in chars)
rotation = "anticlockwise" if clockwise < anticlockwise else "clockwise"
clockwise = sum(t.matrix[1] < 0 and t.matrix[2] > 0 for t in ltchar)
anticlockwise = sum(t.matrix[1] > 0 and t.matrix[2] < 0 for t in ltchar)
rotation = 'anticlockwise' if clockwise < anticlockwise else 'clockwise'
return rotation
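The renamed-parameter version of `get_rotation` can be exercised without PDFMiner by passing minimal stand-ins that expose only the attributes it reads (`matrix` on chars, `get_text()` on lines) -- these classes are illustrative, not real `LTChar`/`LTTextLine` objects:

```python
class Char:
    """Stand-in exposing only the `matrix` (CTM) attribute read below."""
    def __init__(self, matrix):
        self.matrix = matrix

class Line:
    """Stand-in for an LTTextLine exposing only get_text()."""
    def __init__(self, text):
        self._text = text
    def get_text(self):
        return self._text

def get_rotation(chars, horizontal_text, vertical_text):
    # More vertical than horizontal lines -> the text is rotated; the
    # sign pattern of the CTM's b and c entries picks the direction.
    rotation = ""
    hlen = len([t for t in horizontal_text if t.get_text().strip()])
    vlen = len([t for t in vertical_text if t.get_text().strip()])
    if hlen < vlen:
        clockwise = sum(t.matrix[1] < 0 and t.matrix[2] > 0 for t in chars)
        anticlockwise = sum(t.matrix[1] > 0 and t.matrix[2] < 0 for t in chars)
        rotation = "anticlockwise" if clockwise < anticlockwise else "clockwise"
    return rotation

# CTM (a, b, c, d, e, f) = (0, -1, 1, 0, 0, 0) is a 90-degree clockwise turn.
chars = [Char((0, -1, 1, 0, 0, 0))] * 3
print(get_rotation(chars, [Line("x")], [Line("y"), Line("z")]))
```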
@ -326,16 +313,10 @@ def segments_in_bbox(bbox, v_segments, h_segments):
"""
lb = (bbox[0], bbox[1])
rt = (bbox[2], bbox[3])
v_s = [
v
for v in v_segments
if v[1] > lb[1] - 2 and v[3] < rt[1] + 2 and lb[0] - 2 <= v[0] <= rt[0] + 2
]
h_s = [
h
for h in h_segments
if h[0] > lb[0] - 2 and h[2] < rt[0] + 2 and lb[1] - 2 <= h[1] <= rt[1] + 2
]
v_s = [v for v in v_segments if v[1] > lb[1] - 2 and
v[3] < rt[1] + 2 and lb[0] - 2 <= v[0] <= rt[0] + 2]
h_s = [h for h in h_segments if h[0] > lb[0] - 2 and
h[2] < rt[0] + 2 and lb[1] - 2 <= h[1] <= rt[1] + 2]
return v_s, h_s
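Both formatting variants of `segments_in_bbox` implement the same filter: keep segments whose endpoints fall inside the bounding box with a 2-point tolerance. A standalone sketch, with segments given as `(x1, y1, x2, y2)` tuples:

```python
def segments_in_bbox(bbox, v_segments, h_segments):
    # Keep only segments lying inside bbox, with a 2-point tolerance
    # on every comparison (mirrors the filter in the diff above).
    lb, rt = (bbox[0], bbox[1]), (bbox[2], bbox[3])
    v_s = [v for v in v_segments
           if v[1] > lb[1] - 2 and v[3] < rt[1] + 2
           and lb[0] - 2 <= v[0] <= rt[0] + 2]
    h_s = [h for h in h_segments
           if h[0] > lb[0] - 2 and h[2] < rt[0] + 2
           and lb[1] - 2 <= h[1] <= rt[1] + 2]
    return v_s, h_s

bbox = (0, 0, 100, 100)
v = [(50, 10, 50, 90), (150, 10, 150, 90)]   # second one lies outside
h = [(10, 50, 90, 50)]
v_s, h_s = segments_in_bbox(bbox, v, h)
print(len(v_s), len(h_s))
```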
@ -346,125 +327,32 @@ def text_in_bbox(bbox, text):
----------
bbox : tuple
Tuple (x1, y1, x2, y2) representing a bounding box where
(x1, y1) -> lb and (x2, y2) -> rt in the PDF coordinate
(x1, y1) -> lb and (x2, y2) -> rt in PDFMiner coordinate
space.
text : List of PDFMiner text objects.
Returns
-------
t_bbox : list
List of PDFMiner text objects that lie inside table, discarding the overlapping ones
List of PDFMiner text objects that lie inside table.
"""
lb = (bbox[0], bbox[1])
rt = (bbox[2], bbox[3])
t_bbox = [
t
for t in text
if lb[0] - 2 <= (t.x0 + t.x1) / 2.0 <= rt[0] + 2
and lb[1] - 2 <= (t.y0 + t.y1) / 2.0 <= rt[1] + 2
]
# Avoid duplicate text by discarding overlapping boxes
rest = {t for t in t_bbox}
for ba in t_bbox:
for bb in rest.copy():
if ba == bb:
continue
if bbox_intersect(ba, bb):
# if the intersection is larger than 80% of ba's size, we keep the longest
if (bbox_intersection_area(ba, bb) / bbox_area(ba)) > 0.8:
if bbox_longer(bb, ba):
rest.discard(ba)
unique_boxes = list(rest)
return unique_boxes
t_bbox = [t for t in text if lb[0] - 2 <= (t.x0 + t.x1) / 2.0
<= rt[0] + 2 and lb[1] - 2 <= (t.y0 + t.y1) / 2.0
<= rt[1] + 2]
return t_bbox
def bbox_intersection_area(ba, bb) -> float:
"""Returns area of the intersection of the bounding boxes of two PDFMiner objects.
Parameters
----------
ba : PDFMiner text object
bb : PDFMiner text object
Returns
-------
intersection_area : float
Area of the intersection of the bounding boxes of both objects
"""
x_left = max(ba.x0, bb.x0)
y_top = min(ba.y1, bb.y1)
x_right = min(ba.x1, bb.x1)
y_bottom = max(ba.y0, bb.y0)
if x_right < x_left or y_bottom > y_top:
return 0.0
intersection_area = (x_right - x_left) * (y_top - y_bottom)
return intersection_area
def bbox_area(bb) -> float:
"""Returns area of the bounding box of a PDFMiner object.
Parameters
----------
bb : PDFMiner text object
Returns
-------
area : float
Area of the bounding box of the object
"""
return (bb.x1 - bb.x0) * (bb.y1 - bb.y0)
def bbox_intersect(ba, bb) -> bool:
"""Returns True if the bounding boxes of two PDFMiner objects intersect.
Parameters
----------
ba : PDFMiner text object
bb : PDFMiner text object
Returns
-------
overlaps : bool
True if the bounding boxes intersect
"""
return ba.x1 >= bb.x0 and bb.x1 >= ba.x0 and ba.y1 >= bb.y0 and bb.y1 >= ba.y0
def bbox_longer(ba, bb) -> bool:
"""Returns True if the bounding box of the first PDFMiner object is longer or equal to the second.
Parameters
----------
ba : PDFMiner text object
bb : PDFMiner text object
Returns
-------
longer : bool
True if the bounding box of the first object is longer or equal
"""
return (ba.x1 - ba.x0) >= (bb.x1 - bb.x0)
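The three bbox helpers added in this hunk can be exercised in isolation with a simple stand-in exposing the `x0/y0/x1/y1` attributes (real callers pass PDFMiner text objects):

```python
class Box:
    """Minimal stand-in with the coordinates the bbox helpers read."""
    def __init__(self, x0, y0, x1, y1):
        self.x0, self.y0, self.x1, self.y1 = x0, y0, x1, y1

def bbox_area(bb):
    # Area of the object's bounding box.
    return (bb.x1 - bb.x0) * (bb.y1 - bb.y0)

def bbox_intersect(ba, bb):
    # True when the two boxes overlap (touching edges count).
    return ba.x1 >= bb.x0 and bb.x1 >= ba.x0 and ba.y1 >= bb.y0 and bb.y1 >= ba.y0

def bbox_intersection_area(ba, bb):
    # Area of the overlap region, 0.0 when the boxes are disjoint.
    x_left, x_right = max(ba.x0, bb.x0), min(ba.x1, bb.x1)
    y_bottom, y_top = max(ba.y0, bb.y0), min(ba.y1, bb.y1)
    if x_right < x_left or y_bottom > y_top:
        return 0.0
    return (x_right - x_left) * (y_top - y_bottom)

a = Box(0, 0, 10, 10)
b = Box(5, 5, 15, 15)
print(bbox_intersect(a, b), bbox_intersection_area(a, b), bbox_area(a))
```

In `text_in_bbox` above, the 0.8 ratio of `bbox_intersection_area(ba, bb) / bbox_area(ba)` is what decides whether two text objects are near-duplicates worth deduplicating.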
def merge_close_lines(ar, line_tol=2):
"""Merges lines which are within a tolerance by calculating a
moving mean, based on their x or y axis projections.
def remove_close_lines(ar, line_close_tol=2):
"""Removes lines which are within a tolerance, based on their x or
y axis projections.
Parameters
----------
ar : list
line_tol : int, optional (default: 2)
line_close_tol : int, optional (default: 2)
Returns
-------
@ -477,7 +365,34 @@ def merge_close_lines(ar, line_tol=2):
ret.append(a)
else:
temp = ret[-1]
if np.isclose(temp, a, atol=line_tol):
if np.isclose(temp, a, atol=line_close_tol):
pass
else:
ret.append(a)
return ret
def merge_close_lines(ar, line_close_tol=2):
"""Merges lines which are within a tolerance by calculating a
moving mean, based on their x or y axis projections.
Parameters
----------
ar : list
line_close_tol : int, optional (default: 2)
Returns
-------
ret : list
"""
ret = []
for a in ar:
if not ret:
ret.append(a)
else:
temp = ret[-1]
if np.isclose(temp, a, atol=line_close_tol):
temp = (temp + a) / 2.0
ret[-1] = temp
else:
@ -485,33 +400,7 @@ def merge_close_lines(ar, line_tol=2):
return ret
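The moving-mean behaviour of `merge_close_lines` is easiest to see on a small input -- each value within the tolerance of the last kept value is averaged into it, so clusters collapse toward their mean:

```python
import numpy as np

def merge_close_lines(ar, line_tol=2):
    # Merge coordinates within `line_tol` of each other by replacing
    # the last kept value with the running mean (as in the diff above).
    ret = []
    for a in ar:
        if not ret:
            ret.append(a)
        else:
            temp = ret[-1]
            if np.isclose(temp, a, atol=line_tol):
                ret[-1] = (temp + a) / 2.0
            else:
                ret.append(a)
    return ret

# 10 and 11 collapse to their mean 10.5; 20 is far enough to survive alone.
print(merge_close_lines([10, 11, 20]))
```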
def text_strip(text, strip=""):
"""Strips any characters in `strip` that are present in `text`.
Parameters
----------
text : str
Text to process and strip.
strip : str, optional (default: '')
Characters that should be stripped from `text`.
Returns
-------
stripped : str
"""
if not strip:
return text
stripped = re.sub(
fr"[{''.join(map(re.escape, strip))}]", "", text, flags=re.UNICODE
)
return stripped
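`text_strip` builds a character class from the user-supplied `strip` string; the `re.escape` pass is what makes characters like `.` or `\n` safe inside the class. A standalone sketch:

```python
import re

def text_strip(text, strip=""):
    # Remove every character listed in `strip` from `text`; each
    # character is regex-escaped before being placed in the class.
    if not strip:
        return text
    return re.sub(
        fr"[{''.join(map(re.escape, strip))}]", "", text, flags=re.UNICODE
    )

# Strip thousands separators and trailing newlines from a cell value.
print(text_strip("1,234.56\n", ",\n"))
```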
# TODO: combine the following functions into a TextProcessor class which
# applies corresponding transformations sequentially
# (inspired from sklearn.pipeline.Pipeline)
def flag_font_size(textline, direction, strip_text=""):
def flag_font_size(textline, direction):
"""Flags super/subscripts in text by enclosing them with <s></s>.
May give false positives.
@ -521,27 +410,16 @@ def flag_font_size(textline, direction, strip_text=""):
List of PDFMiner LTChar objects.
direction : string
Direction of the PDFMiner LTTextLine object.
strip_text : str, optional (default: '')
Characters that should be stripped from a string before
assigning it to a cell.
Returns
-------
fstring : string
"""
if direction == "horizontal":
d = [
(t.get_text(), np.round(t.height, decimals=6))
for t in textline
if not isinstance(t, LTAnno)
]
elif direction == "vertical":
d = [
(t.get_text(), np.round(t.width, decimals=6))
for t in textline
if not isinstance(t, LTAnno)
]
if direction == 'horizontal':
d = [(t.get_text(), np.round(t.height, decimals=6)) for t in textline if not isinstance(t, LTAnno)]
elif direction == 'vertical':
d = [(t.get_text(), np.round(t.width, decimals=6)) for t in textline if not isinstance(t, LTAnno)]
l = [np.round(size, decimals=6) for text, size in d]
if len(set(l)) > 1:
flist = []
@ -549,21 +427,21 @@ def flag_font_size(textline, direction, strip_text=""):
for key, chars in groupby(d, itemgetter(1)):
if key == min_size:
fchars = [t[0] for t in chars]
if "".join(fchars).strip():
fchars.insert(0, "<s>")
fchars.append("</s>")
flist.append("".join(fchars))
if ''.join(fchars).strip():
fchars.insert(0, '<s>')
fchars.append('</s>')
flist.append(''.join(fchars))
else:
fchars = [t[0] for t in chars]
if "".join(fchars).strip():
flist.append("".join(fchars))
fstring = "".join(flist)
if ''.join(fchars).strip():
flist.append(''.join(fchars))
fstring = ''.join(flist).strip('\n')
else:
fstring = "".join([t.get_text() for t in textline])
return text_strip(fstring, strip_text)
fstring = ''.join([t.get_text() for t in textline]).strip('\n')
return fstring
def split_textline(table, textline, direction, flag_size=False, strip_text=""):
def split_textline(table, textline, direction, flag_size=False):
"""Splits PDFMiner LTTextLine into substrings if it spans across
multiple rows/columns.
@ -578,9 +456,6 @@ def split_textline(table, textline, direction, flag_size=False, strip_text=""):
Whether or not to highlight a substring using <s></s>
if its size is different from rest of the string. (Useful for
super and subscripts.)
strip_text : str, optional (default: '')
Characters that should be stripped from a string before
assigning it to a cell.
Returns
-------
@ -593,70 +468,38 @@ def split_textline(table, textline, direction, flag_size=False, strip_text=""):
cut_text = []
bbox = textline.bbox
try:
if direction == "horizontal" and not textline.is_empty():
x_overlap = [
i
for i, x in enumerate(table.cols)
if x[0] <= bbox[2] and bbox[0] <= x[1]
]
r_idx = [
j
for j, r in enumerate(table.rows)
if r[1] <= (bbox[1] + bbox[3]) / 2 <= r[0]
]
if direction == 'horizontal' and not textline.is_empty():
x_overlap = [i for i, x in enumerate(table.cols) if x[0] <= bbox[2] and bbox[0] <= x[1]]
r_idx = [j for j, r in enumerate(table.rows) if r[1] <= (bbox[1] + bbox[3]) / 2 <= r[0]]
r = r_idx[0]
x_cuts = [
(c, table.cells[r][c].x2) for c in x_overlap if table.cells[r][c].right
]
x_cuts = [(c, table.cells[r][c].x2) for c in x_overlap if table.cells[r][c].right]
if not x_cuts:
x_cuts = [(x_overlap[0], table.cells[r][-1].x2)]
for obj in textline._objs:
row = table.rows[r]
for cut in x_cuts:
if isinstance(obj, LTChar):
if (
row[1] <= (obj.y0 + obj.y1) / 2 <= row[0]
and (obj.x0 + obj.x1) / 2 <= cut[1]
):
if (row[1] <= (obj.y0 + obj.y1) / 2 <= row[0] and
(obj.x0 + obj.x1) / 2 <= cut[1]):
cut_text.append((r, cut[0], obj))
break
else:
# TODO: add test
if cut == x_cuts[-1]:
cut_text.append((r, cut[0] + 1, obj))
elif isinstance(obj, LTAnno):
cut_text.append((r, cut[0], obj))
elif direction == "vertical" and not textline.is_empty():
y_overlap = [
j
for j, y in enumerate(table.rows)
if y[1] <= bbox[3] and bbox[1] <= y[0]
]
c_idx = [
i
for i, c in enumerate(table.cols)
if c[0] <= (bbox[0] + bbox[2]) / 2 <= c[1]
]
elif direction == 'vertical' and not textline.is_empty():
y_overlap = [j for j, y in enumerate(table.rows) if y[1] <= bbox[3] and bbox[1] <= y[0]]
c_idx = [i for i, c in enumerate(table.cols) if c[0] <= (bbox[0] + bbox[2]) / 2 <= c[1]]
c = c_idx[0]
y_cuts = [
(r, table.cells[r][c].y1) for r in y_overlap if table.cells[r][c].bottom
]
y_cuts = [(r, table.cells[r][c].y1) for r in y_overlap if table.cells[r][c].bottom]
if not y_cuts:
y_cuts = [(y_overlap[0], table.cells[-1][c].y1)]
for obj in textline._objs:
col = table.cols[c]
for cut in y_cuts:
if isinstance(obj, LTChar):
if (
col[0] <= (obj.x0 + obj.x1) / 2 <= col[1]
and (obj.y0 + obj.y1) / 2 >= cut[1]
):
if (col[0] <= (obj.x0 + obj.x1) / 2 <= col[1] and
(obj.y0 + obj.y1) / 2 >= cut[1]):
cut_text.append((cut[0], c, obj))
break
else:
# TODO: add test
if cut == y_cuts[-1]:
cut_text.append((cut[0] - 1, c, obj))
elif isinstance(obj, LTAnno):
cut_text.append((cut[0], c, obj))
except IndexError:
@ -664,26 +507,14 @@ def split_textline(table, textline, direction, flag_size=False, strip_text=""):
grouped_chars = []
for key, chars in groupby(cut_text, itemgetter(0, 1)):
if flag_size:
grouped_chars.append(
(
key[0],
key[1],
flag_font_size(
[t[2] for t in chars], direction, strip_text=strip_text
),
)
)
grouped_chars.append((key[0], key[1], flag_font_size([t[2] for t in chars], direction)))
else:
gchars = [t[2].get_text() for t in chars]
grouped_chars.append(
(key[0], key[1], text_strip("".join(gchars), strip_text))
)
grouped_chars.append((key[0], key[1], ''.join(gchars).strip('\n')))
return grouped_chars
def get_table_index(
table, t, direction, split_text=False, flag_size=False, strip_text=""
):
def get_table_index(table, t, direction, split_text=False, flag_size=False):
"""Gets indices of the table cell where given text object lies by
comparing their y and x-coordinates.
@ -701,9 +532,6 @@ def get_table_index(
Whether or not to highlight a substring using <s></s>
if its size is different from rest of the string. (Useful for
super and subscripts)
strip_text : str, optional (default: '')
Characters that should be stripped from a string before
assigning it to a cell.
Returns
-------
@ -722,9 +550,8 @@ def get_table_index(
"""
r_idx, c_idx = [-1] * 2
for r in range(len(table.rows)):
if (t.y0 + t.y1) / 2.0 < table.rows[r][0] and (t.y0 + t.y1) / 2.0 > table.rows[
r
][1]:
if ((t.y0 + t.y1) / 2.0 < table.rows[r][0] and
(t.y0 + t.y1) / 2.0 > table.rows[r][1]):
lt_col_overlap = []
for c in table.cols:
if c[0] <= t.x1 and c[1] >= t.x0:
@ -733,13 +560,12 @@ def get_table_index(
lt_col_overlap.append(abs(left - right) / abs(c[0] - c[1]))
else:
lt_col_overlap.append(-1)
if len(list(filter(lambda x: x != -1, lt_col_overlap))) == 0:
text = t.get_text().strip("\n")
if len(filter(lambda x: x != -1, lt_col_overlap)) == 0:
text = t.get_text().strip('\n')
text_range = (t.x0, t.x1)
col_range = (table.cols[0][0], table.cols[-1][1])
warnings.warn(
f"{text} {text_range} does not lie in column range {col_range}"
)
logger.info("{} {} does not lie in column range {}".format(
text, text_range, col_range))
r_idx = r
c_idx = lt_col_overlap.index(max(lt_col_overlap))
break
@ -760,26 +586,12 @@ def get_table_index(
error = ((X * (y0_offset + y1_offset)) + (Y * (x0_offset + x1_offset))) / charea
if split_text:
return (
split_textline(
table, t, direction, flag_size=flag_size, strip_text=strip_text
),
error,
)
return split_textline(table, t, direction, flag_size=flag_size), error
else:
if flag_size:
return (
[
(
r_idx,
c_idx,
flag_font_size(t._objs, direction, strip_text=strip_text),
)
],
error,
)
return [(r_idx, c_idx, flag_font_size(t._objs, direction))], error
else:
return [(r_idx, c_idx, text_strip(t.get_text(), strip_text))], error
return [(r_idx, c_idx, t.get_text().strip('\n'))], error
def compute_accuracy(error_weights):
@ -830,35 +642,62 @@ def compute_whitespace(d):
r_nempty_cells, c_nempty_cells = [], []
for i in d:
for j in i:
if j.strip() == "":
if j.strip() == '':
whitespace += 1
whitespace = 100 * (whitespace / float(len(d) * len(d[0])))
return whitespace
def get_page_layout(
filename,
line_overlap=0.5,
char_margin=1.0,
line_margin=0.5,
word_margin=0.1,
boxes_flow=0.5,
detect_vertical=True,
all_texts=True,
):
def remove_empty(d):
"""Removes empty rows and columns from a two-dimensional list.
Parameters
----------
d : list
Returns
-------
d : list
"""
for i, row in enumerate(d):
if row == [''] * len(row):
d.pop(i)
d = zip(*d)
d = [list(row) for row in d if any(row)]
d = zip(*d)
return d
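The removed `remove_empty` above returns a `zip` object under Python 3 and pops rows from the list while iterating over it, which skips the row after each removal. A sketch of the same row/column pruning that sidesteps both issues -- a deliberate deviation from the original, shown for clarity:

```python
def remove_empty(d):
    # Drop all-empty rows, then transpose, drop all-empty columns,
    # and transpose back; lists are materialized because zip() is an
    # iterator under Python 3.
    d = [row for row in d if any(cell != '' for cell in row)]
    d = [list(col) for col in zip(*d) if any(col)]
    d = [list(row) for row in zip(*d)]
    return d

# The middle row and middle column are entirely empty and are removed.
print(remove_empty([['a', '', 'b'], ['', '', ''], ['c', '', 'd']]))
```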
def encode_(ar):
"""Encodes two-dimensional list into unicode.
Parameters
----------
ar : list
Returns
-------
ar : list
"""
ar = [[r.encode('utf-8') for r in row] for row in ar]
return ar
def get_page_layout(filename, char_margin=1.0, line_margin=0.5, word_margin=0.1,
detect_vertical=True, all_texts=True):
"""Returns a PDFMiner LTPage object and page dimension of a single
page pdf. To get the definitions of kwargs, see
https://pdfminersix.rtfd.io/en/latest/reference/composable.html.
page pdf. See https://euske.github.io/pdfminer/ to get definitions
of kwargs.
Parameters
----------
filename : string
Path to pdf file.
line_overlap : float
char_margin : float
line_margin : float
word_margin : float
boxes_flow : float
detect_vertical : bool
all_texts : bool
@ -870,22 +709,16 @@ def get_page_layout(
Dimension of pdf page in the form (width, height).
"""
with open(filename, "rb") as f:
with open(filename, 'r') as f:
parser = PDFParser(f)
document = PDFDocument(parser)
if not document.is_extractable:
raise PDFTextExtractionNotAllowed(
f"Text extraction is not allowed: {filename}"
)
laparams = LAParams(
line_overlap=line_overlap,
char_margin=char_margin,
line_margin=line_margin,
word_margin=word_margin,
boxes_flow=boxes_flow,
detect_vertical=detect_vertical,
all_texts=all_texts,
)
raise PDFTextExtractionNotAllowed
laparams = LAParams(char_margin=char_margin,
line_margin=line_margin,
word_margin=word_margin,
detect_vertical=detect_vertical,
all_texts=all_texts)
rsrcmgr = PDFResourceManager()
device = PDFPageAggregator(rsrcmgr, laparams=laparams)
interpreter = PDFPageInterpreter(rsrcmgr, device)
@ -919,11 +752,9 @@ def get_text_objects(layout, ltype="char", t=None):
"""
if ltype == "char":
LTObject = LTChar
elif ltype == "image":
LTObject = LTImage
elif ltype == "horizontal_text":
elif ltype == "lh":
LTObject = LTTextLineHorizontal
elif ltype == "vertical_text":
elif ltype == "lv":
LTObject = LTTextLineVertical
if t is None:
t = []
@ -936,3 +767,27 @@ def get_text_objects(layout, ltype="char", t=None):
except AttributeError:
pass
return t
def merge_tuples(tuples):
"""Merges a list of overlapping tuples.
Parameters
----------
tuples : list
List of tuples where a tuple is a single axis coordinate pair.
Yields
------
tuple
"""
merged = list(tuples[0])
for s, e in tuples:
if s <= merged[1]:
merged[1] = max(merged[1], e)
else:
yield tuple(merged)
merged[0] = s
merged[1] = e
yield tuple(merged)
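`merge_tuples` assumes its input is sorted by start coordinate; overlapping or touching intervals are folded into one before being yielded. A standalone sketch:

```python
def merge_tuples(tuples):
    # Fold a start-sorted list of (start, end) intervals, merging any
    # interval whose start falls inside the one being accumulated
    # (mirrors the generator in the diff above).
    merged = list(tuples[0])
    for s, e in tuples:
        if s <= merged[1]:
            merged[1] = max(merged[1], e)
        else:
            yield tuple(merged)
            merged[0], merged[1] = s, e
    yield tuple(merged)

# (0, 5) and (3, 8) overlap and merge; (10, 12) stands alone.
print(list(merge_tuples([(0, 5), (3, 8), (10, 12)])))  # [(0, 8), (10, 12)]
```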
@ -1,4 +0,0 @@
"Età dell'Assicuratoall'epoca del decesso","Misura % dimaggiorazione"
"18-75","1,00%"
"76-80","0,50%"
"81 in poi","0,10%"
@ -1,5 +1,5 @@
<style type="text/css">
div.section h1 {font-size: 210%;}
div.section h1 {font-size: 225%;}
/* "Quick Search" should be capitalized. */
div#searchbox h3 {text-transform: capitalize;}
/* Make the document a little wider, less code is cut-off. */
@ -4,13 +4,13 @@
</a>
</p>
<p>
<iframe src="https://ghbtns.com/github-btn.html?user=camelot-dev&repo=camelot&type=watch&count=true&size=large"
<iframe src="https://ghbtns.com/github-btn.html?user=socialcopsdev&repo=camelot&type=watch&count=true&size=large"
allowtransparency="true" frameborder="0" scrolling="0" width="200px" height="35px"></iframe>
</p>
<h3>Useful Links</h3>
<ul>
<li><a href="https://github.com/camelot-dev/camelot">Camelot @ GitHub</a></li>
<li><a href="https://github.com/socialcopsdev/camelot">Camelot @ GitHub</a></li>
<li><a href="https://pypi.org/project/camelot-py/">Camelot @ PyPI</a></li>
<li><a href="https://github.com/camelot-dev/camelot/issues">Issue Tracker</a></li>
</ul>
<li><a href="https://github.com/socialcopsdev/camelot/issues">Issue Tracker</a></li>
</ul>
@ -4,6 +4,6 @@
</a>
</p>
<p>
<iframe src="https://ghbtns.com/github-btn.html?user=camelot-dev&repo=camelot&type=watch&count=true&size=large"
<iframe src="https://ghbtns.com/github-btn.html?user=socialcopsdev&repo=camelot&type=watch&count=true&size=large"
allowtransparency="true" frameborder="0" scrolling="0" width="200px" height="35px"></iframe>
</p>
</p>
@ -1,19 +1,7 @@
# flasky pygments style based on tango style
from pygments.style import Style
from pygments.token import (
Keyword,
Name,
Comment,
String,
Error,
Number,
Operator,
Generic,
Whitespace,
Punctuation,
Other,
Literal,
)
from pygments.token import Keyword, Name, Comment, String, Error, \
Number, Operator, Generic, Whitespace, Punctuation, Other, Literal
class FlaskyStyle(Style):
@ -22,68 +10,77 @@ class FlaskyStyle(Style):
styles = {
# No corresponding class for the following:
# Text: "", # class: ''
Whitespace: "underline #f8f8f8", # class: 'w'
Error: "#a40000 border:#ef2929", # class: 'err'
Other: "#000000", # class 'x'
Comment: "italic #8f5902", # class: 'c'
Comment.Preproc: "noitalic", # class: 'cp'
Keyword: "bold #004461", # class: 'k'
Keyword.Constant: "bold #004461", # class: 'kc'
Keyword.Declaration: "bold #004461", # class: 'kd'
Keyword.Namespace: "bold #004461", # class: 'kn'
Keyword.Pseudo: "bold #004461", # class: 'kp'
Keyword.Reserved: "bold #004461", # class: 'kr'
Keyword.Type: "bold #004461", # class: 'kt'
Operator: "#582800", # class: 'o'
Operator.Word: "bold #004461", # class: 'ow' - like keywords
Punctuation: "bold #000000", # class: 'p'
#Text: "", # class: ''
Whitespace: "underline #f8f8f8", # class: 'w'
Error: "#a40000 border:#ef2929", # class: 'err'
Other: "#000000", # class 'x'
Comment: "italic #8f5902", # class: 'c'
Comment.Preproc: "noitalic", # class: 'cp'
Keyword: "bold #004461", # class: 'k'
Keyword.Constant: "bold #004461", # class: 'kc'
Keyword.Declaration: "bold #004461", # class: 'kd'
Keyword.Namespace: "bold #004461", # class: 'kn'
Keyword.Pseudo: "bold #004461", # class: 'kp'
Keyword.Reserved: "bold #004461", # class: 'kr'
Keyword.Type: "bold #004461", # class: 'kt'
Operator: "#582800", # class: 'o'
Operator.Word: "bold #004461", # class: 'ow' - like keywords
Punctuation: "bold #000000", # class: 'p'
# because special names such as Name.Class, Name.Function, etc.
# are not recognized as such later in the parsing, we choose them
# to look the same as ordinary variables.
Name: "#000000", # class: 'n'
Name.Attribute: "#c4a000", # class: 'na' - to be revised
Name.Builtin: "#004461", # class: 'nb'
Name.Builtin.Pseudo: "#3465a4", # class: 'bp'
Name.Class: "#000000", # class: 'nc' - to be revised
Name.Constant: "#000000", # class: 'no' - to be revised
Name.Decorator: "#888", # class: 'nd' - to be revised
Name.Entity: "#ce5c00", # class: 'ni'
Name.Exception: "bold #cc0000", # class: 'ne'
Name.Function: "#000000", # class: 'nf'
Name.Property: "#000000", # class: 'py'
Name.Label: "#f57900", # class: 'nl'
Name.Namespace: "#000000", # class: 'nn' - to be revised
Name.Other: "#000000", # class: 'nx'
Name.Tag: "bold #004461", # class: 'nt' - like a keyword
Name.Variable: "#000000", # class: 'nv' - to be revised
Name.Variable.Class: "#000000", # class: 'vc' - to be revised
Name.Variable.Global: "#000000", # class: 'vg' - to be revised
Name.Variable.Instance: "#000000", # class: 'vi' - to be revised
Number: "#990000", # class: 'm'
Literal: "#000000", # class: 'l'
Literal.Date: "#000000", # class: 'ld'
String: "#4e9a06", # class: 's'
String.Backtick: "#4e9a06", # class: 'sb'
String.Char: "#4e9a06", # class: 'sc'
String.Doc: "italic #8f5902", # class: 'sd' - like a comment
String.Double: "#4e9a06", # class: 's2'
String.Escape: "#4e9a06", # class: 'se'
String.Heredoc: "#4e9a06", # class: 'sh'
String.Interpol: "#4e9a06", # class: 'si'
String.Other: "#4e9a06", # class: 'sx'
String.Regex: "#4e9a06", # class: 'sr'
String.Single: "#4e9a06", # class: 's1'
String.Symbol: "#4e9a06", # class: 'ss'
Generic: "#000000", # class: 'g'
Generic.Deleted: "#a40000", # class: 'gd'
Generic.Emph: "italic #000000", # class: 'ge'
Generic.Error: "#ef2929", # class: 'gr'
Generic.Heading: "bold #000080", # class: 'gh'
Generic.Inserted: "#00A000", # class: 'gi'
Generic.Output: "#888", # class: 'go'
Generic.Prompt: "#745334", # class: 'gp'
Generic.Strong: "bold #000000", # class: 'gs'
Generic.Subheading: "bold #800080", # class: 'gu'
Generic.Traceback: "bold #a40000", # class: 'gt'
}
Name: "#000000", # class: 'n'
Name.Attribute: "#c4a000", # class: 'na' - to be revised
Name.Builtin: "#004461", # class: 'nb'
Name.Builtin.Pseudo: "#3465a4", # class: 'bp'
Name.Class: "#000000", # class: 'nc' - to be revised
Name.Constant: "#000000", # class: 'no' - to be revised
Name.Decorator: "#888", # class: 'nd' - to be revised
Name.Entity: "#ce5c00", # class: 'ni'
Name.Exception: "bold #cc0000", # class: 'ne'
Name.Function: "#000000", # class: 'nf'
Name.Property: "#000000", # class: 'py'
Name.Label: "#f57900", # class: 'nl'
Name.Namespace: "#000000", # class: 'nn' - to be revised
Name.Other: "#000000", # class: 'nx'
Name.Tag: "bold #004461", # class: 'nt' - like a keyword
Name.Variable: "#000000", # class: 'nv' - to be revised
Name.Variable.Class: "#000000", # class: 'vc' - to be revised
Name.Variable.Global: "#000000", # class: 'vg' - to be revised
Name.Variable.Instance: "#000000", # class: 'vi' - to be revised
Number: "#990000", # class: 'm'
Literal: "#000000", # class: 'l'
Literal.Date: "#000000", # class: 'ld'
String: "#4e9a06", # class: 's'
String.Backtick: "#4e9a06", # class: 'sb'
String.Char: "#4e9a06", # class: 'sc'
String.Doc: "italic #8f5902", # class: 'sd' - like a comment
String.Double: "#4e9a06", # class: 's2'
String.Escape: "#4e9a06", # class: 'se'
String.Heredoc: "#4e9a06", # class: 'sh'
String.Interpol: "#4e9a06", # class: 'si'
String.Other: "#4e9a06", # class: 'sx'
String.Regex: "#4e9a06", # class: 'sr'
String.Single: "#4e9a06", # class: 's1'
String.Symbol: "#4e9a06", # class: 'ss'
Generic: "#000000", # class: 'g'
Generic.Deleted: "#a40000", # class: 'gd'
Generic.Emph: "italic #000000", # class: 'ge'
Generic.Error: "#ef2929", # class: 'gr'
Generic.Heading: "bold #000080", # class: 'gh'
Generic.Inserted: "#00A000", # class: 'gi'
Generic.Output: "#888", # class: 'go'
Generic.Prompt: "#745334", # class: 'gp'
Generic.Strong: "bold #000000", # class: 'gs'
Generic.Subheading: "bold #800080", # class: 'gu'
Generic.Traceback: "bold #a40000", # class: 'gt'
}
@ -1,96 +0,0 @@
"0","1","2","3","4","5","6","7","8","9","10"
"Sl.
No.","District","n
o
i
t
a
l3
opu2-1hs)
P1k
d 20 la
er n
cto(I
ef
j
o
r
P","%
8
8
o )
s
ult t tkh
dna
Aalen l
v(I
i
u
q
E",")
y
n a
umptiomentadult/donnes)
nsres/h t
ouimk
Cqga
al re00n L
ot 4(I
T @
(","menteds, age)nes)
uireg sewastton
qn h
Reudis &ak
al cld L
tnen
To(Ife(I","","","","",""
"","","","","","","f
i
r
a
h
K","i
b
a
R","l
a
t
o
T","e
c
i
R","y
d
d
a
P"
"1","Balasore","23.65","20.81","3.04","3.47","2.78","0.86","3.64","0.17","0.25"
"2","Bhadrak","15.34","13.50","1.97","2.25","3.50","0.05","3.55","1.30","1.94"
"3","Balangir","17.01","14.97","2.19","2.50","6.23","0.10","6.33","3.83","5.72"
"4","Subarnapur","6.70","5.90","0.86","0.98","4.48","1.13","5.61","4.63","6.91"
"5","Cuttack","26.63","23.43","3.42","3.91","3.75","0.06","3.81","-0.10","-0.15"
"6","Jagatsingpur","11.49","10.11","1.48","1.69","2.10","0.02","2.12","0.43","0.64"
"7","Jajpur","18.59","16.36","2.39","2.73","2.13","0.04","2.17","-0.56","-0.84"
"8","Kendrapara","14.62","12.87","1.88","2.15","2.60","0.07","2.67","0.52","0.78"
"9","Dhenkanal","12.13","10.67","1.56","1.78","2.26","0.02","2.28","0.50","0.75"
"10","Angul","12.93","11.38","1.66","1.90","1.73","0.02","1.75","-0.15","-0.22"
"11","Ganjam","35.77","31.48","4.60","5.26","4.57","0.00","4.57","-0.69","-1.03"
"12","Gajapati","5.85","5.15","0.75","0.86","0.68","0.01","0.69","-0.17","-0.25"
"13","Kalahandi","16.12","14.19","2.07","2.37","5.42","1.13","6.55","4.18","6.24"
"14","Nuapada","6.18","5.44","0.79","0.90","1.98","0.08","2.06","1.16","1.73"
"15","Keonjhar","18.42","16.21","2.37","2.71","2.76","0.08","2.84","0.13","0.19"
"16","Koraput","14.09","12.40","1.81","2.07","2.08","0.34","2.42","0.35","0.52"
"17","Malkangiri","6.31","5.55","0.81","0.93","1.78","0.04","1.82","0.89","1.33"
"18","Nabarangpur","12.50","11.00","1.61","1.84","3.26","0.02","3.28","1.44","2.15"
"19","Rayagada","9.83","8.65","1.26","1.44","1.15","0.03","1.18","-0.26","-0.39"
"20","Mayurbhanj","25.61","22.54","3.29","3.76","4.90","0.06","4.96","1.20","1.79"
"21","Kandhamal","7.45","6.56","0.96","1.10","0.70","0.01","0.71","-0.39","-0.58"
"22","Boudh","4.51","3.97","0.58","0.66","1.73","0.03","1.76","1.10","1.64"
"23","Puri","17.29","15.22","2.22","2.54","2.45","0.99","3.44","0.90","1.34"
"24","Khordha","23.08","20.31","2.97","3.39","2.02","0.03","2.05","-1.34","-2.00"
"25","Nayagarh","9.78","8.61","1.26","1.44","2.10","0.00","2.10","0.66","0.99"
"26","Sambalpur","10.62","9.35","1.37","1.57","3.45","0.71","4.16","2.59","3.87"
"27","Bargarh","15.00","13.20","1.93","2.21","6.87","2.65","9.52","7.31","10.91"
"28","Deogarh","3.18","2.80","0.41","0.47","1.12","0.07","1.19","0.72","1.07"
"29","Jharsuguda","5.91","5.20","0.76","0.87","0.99","0.01","1.00","0.13","0.19"
"30","","","18.66","2.72","3.11","4.72","0.02","4.74","1.63","2.43"
@ -1,56 +0,0 @@
"0","1","2","3","4","5","6","7"
"Rate of Accidental Deaths & Suicides and Population Growth During 1967 to 2013","","","","","","",""
"Sl.
No.","Year","Population
(in Lakh)","Accidental Deaths","","Suicides","","Percentage
Population
growth"
"","","","Incidence","Rate","Incidence","Rate",""
"(1)","(2)","(3)","(4)","(5)","(6)","(7)","(8)"
"1.","1967","4999","126762","25.4","38829","7.8","2.2"
"2.","1968","5111","126232","24.7","40688","8.0","2.2"
"3.","1969","5225","130755","25.0","43633","8.4","2.2"
"4.","1970","5343","139752","26.2","48428","9.1","2.3"
"5.","1971","5512","105601","19.2","43675","7.9","3.2"
"6.","1972","5635","106184","18.8","43601","7.7","2.2"
"7.","1973","5759","130654","22.7","40807","7.1","2.2"
"8.","1974","5883","110624","18.8","46008","7.8","2.2"
"9.","1975","6008","113016","18.8","42890","7.1","2.1"
"10.","1976","6136","111611","18.2","41415","6.7","2.1"
"11.","1977","6258","117338","18.8","39718","6.3","2.0"
"12.","1978","6384","118594","18.6","40207","6.3","2.0"
"13.","1979","6510","108987","16.7","38217","5.9","2.0"
"14.","1980","6636","116912","17.6","41663","6.3","1.9"
"15.","1981","6840","122221","17.9","40245","5.9","3.1"
"16.","1982","7052","125993","17.9","44732","6.3","3.1"
"17.","1983","7204","128576","17.8","46579","6.5","2.2"
"18.","1984","7356","134628","18.3","50571","6.9","2.1"
"19.","1985","7509","139657","18.6","52811","7.0","2.1"
"20.","1986","7661","147023","19.2","54357","7.1","2.0"
"21.","1987","7814","152314","19.5","58568","7.5","2.0"
"22.","1988","7966","163522","20.5","64270","8.1","1.9"
"23.","1989","8118","169066","20.8","68744","8.5","1.9"
"24.","1990","8270","174401","21.1","73911","8.9","1.9"
"25.","1991","8496","188003","22.1","78450","9.2","2.7"
"26.","1992","8677","194910","22.5","80149","9.2","2.1"
"27.","1993","8838","192357","21.8","84244","9.5","1.9"
"28.","1994","8997","190435","21.2","89195","9.9","1.8"
"29.","1995","9160","222487","24.3","89178","9.7","1.8"
"30.","1996","9319","220094","23.6","88241","9.5","1.7"
"31.","1997","9552","233903","24.5","95829","10.0","2.5"
"32.","1998","9709","258409","26.6","104713","10.8","1.6"
"33.","1999","9866","271918","27.6","110587","11.2","1.6"
"34.","2000","10021","255883","25.5","108593","10.8","1.6"
"35.","2001","10270","271019","26.4","108506","10.6","2.5"
"36.","2002","10506","260122","24.8","110417","10.5","2.3"
"37.","2003","10682","259625","24.3","110851","10.4","1.7"
"38.","2004","10856","277263","25.5","113697","10.5","1.6"
"39.","2005","11028","294175","26.7","113914","10.3","1.6"
"40.","2006","11198","314704","28.1","118112","10.5","1.5"
"41.","2007","11366","340794","30.0","122637","10.8","1.5"
"42.","2008","11531","342309","29.7","125017","10.8","1.4"
"43.","2009","11694","357021","30.5","127151","10.9","1.4"
"44.","2010","11858","384649","32.4","134599","11.4","1.4"
"45.","2011","12102","390884","32.3","135585","11.2","2.1"
"46.","2012","12134","394982","32.6","135445","11.2","1.0"
"47.","2013","12288","400517","32.6","134799","11.0","1.0"


@@ -1,18 +0,0 @@
"0","1","2"
"","e
bl
a
ail
v
a
t
o
n
a
t
a
D
*",""


@@ -1,3 +0,0 @@
"0"
"Sl."
"No."


@@ -1,3 +0,0 @@
"0"
"Table 6 : DISTRIBUTION (%) OF HOUSEHOLDS BY LITERACY STATUS OF"
"MALE HEAD OF THE HOUSEHOLD"


@@ -1,7 +1,3 @@
"[In thousands (11,062.6 represents 11,062,600) For year ending December 31. Based on Uniform Crime Reporting (UCR)","","","","","","","","",""
"Program. Represents arrests reported (not charged) by 12,910 agencies with a total population of 247,526,916 as estimated","","","","","","","","",""
"by the FBI. Some persons may be arrested more than once during a year, therefore, the data in this table, in some cases,","","","","","","","","",""
"could represent multiple arrests of the same person. See text, this section and source]","","","","","","","","",""
"","","Total","","","Male","","","Female",""
"Offense charged","","Under 18","18 years","","Under 18","18 years","","Under 18","18 years"
"","Total","years","and over","Total","years","and over","Total","years","and over"
@@ -40,4 +36,3 @@
"Curfew and loitering law violations ..","91.0","91.0","(X)","63.1","63.1","(X)","28.0","28.0","(X)"
"Runaways . . . . . . . .. .. .. .. .. ....","75.8","75.8","(X)","34.0","34.0","(X)","41.8","41.8","(X)"
""," Represents zero. X Not applicable. 1 Buying, receiving, possessing stolen property. 2 Except forcible rape and prostitution.","","","","","","","",""
"","Source: U.S. Department of Justice, Federal Bureau of Investigation, Uniform Crime Reports, Arrests Master Files.","","","","","","","",""



@@ -1,7 +1,3 @@
"","Source: U.S. Department of Justice, Federal Bureau of Investigation, Uniform Crime Reports, Arrests Master Files.","","","",""
"Table 325. Arrests by Race: 2009","","","","",""
"[Based on Uniform Crime Reporting (UCR) Program. Represents arrests reported (not charged) by 12,371 agencies","","","","",""
"with a total population of 239,839,971 as estimated by the FBI. See headnote, Table 324]","","","","",""
"","","","","American",""
"Offense charged","","","","Indian/Alaskan","Asian Pacific"
"","Total","White","Black","Native","Islander"
@@ -38,4 +34,3 @@
"Curfew and loitering law violations . .. ... .. ....","89,578","54,439","33,207","872","1,060"
"Runaways . . . . . . . .. .. .. .. .. .. .... .. ..... .","73,616","48,343","19,670","1,653","3,950"
"1 Except forcible rape and prostitution.","","","","",""
"","Source: U.S. Department of Justice, Federal Bureau of Investigation, “Crime in the United States, Arrests,” September 2010,","","","",""



@@ -1,43 +1,35 @@
"","2012 BETTER VARIETIES Harvest Report for Minnesota Central [ MNCE ]2012 BETTER VARIETIES Harvest Report for Minnesota Central [ MNCE ]","","","","","","","","","","","","ALL SEASON TESTALL SEASON TEST",""
"","Doug Toreen, Renville County, MN 55310 [ BIRD ISLAND ]Doug Toreen, Renville County, MN 55310","","","","","[ BIRD ISLAND ]","","","","","","","1.3 - 2.0 MAT. GROUP1.3 - 2.0 MAT. GROUP",""
"PREVPREV. CROP/HERB:","CROP/HERB","C/ S","Corn / Surpass, RoundupR","d","","","","","","","","","","S2MNCE01S2MNCE01"
"SOIL DESCRIPTION:","","C","Canisteo clay loam, mod. well drained, non-irrigated","","","","","","","","","","",""
"SOIL CONDITIONS:","","","High P, high K, 6.7 pH, 3.9% OM, Low SCN","","","","","","","","","","","30"" ROW SPACING"
"TILLAGE/CULTIVATION:TILLAGE/CULTIVATION:","","","conventional w/ fall tillconventional w/ fall till","","","","","","","","","","",""
"PEST MANAGEMENT:PEST MANAGEMENT:","","Roundup twiceRoundup twice","","","","","","","","","","","",""
"SEEDED - RATE:","","May 15M15","140,000 /A140 000 /A","","","","","","","","TOP 30 foTOP 30 for YIELD of 63 TESTED","","YIELD of 63 TESTED",""
"HARVESTEDHARVESTED - STAND:","STAND","O t 3Oct 3","122 921 /A122,921 /A","","","","","","","","","AVERAGE of (3) REPLICATIONSAVERAGE of (3) REPLICATIONS","",""
"","","","","","","SCN","Seed","Yield","Moisture","Lodgingg","g","Stand","","Gross"
"","Company/Brandpy","Product/Brand†","","Technol.†","Mat.","Resist.","Trmt.†","Bu/A","%","%","","(x 1000)(",")","Income"
"","KrugerKruger","K2-1901K2 1901","","RR2YRR2Y","1.91.9","RR","Ac,PVAc,PV","56.456.4","7.67.6","00","","126.3126.3","","$846$846"
"","StineStine","19RA02 §19RA02 §","","RR2YRR2Y","1 91.9","RR","CMBCMB","55.355.3","7 67.6","00","","120 0120.0","","$830$830"
"","WensmanWensman","W 3190NR2W 3190NR2","","RR2YRR2Y","1 91.9","RR","AcAc","54 554.5","7 67.6","00","","119 5119.5","","$818$818"
"","H ftHefty","H17Y12H17Y12","","RR2YRR2Y","1 71.7","MRMR","II","53 753.7","7 77.7","00","","124 4124.4","","$806$806"
"","Dyna-Gro","S15RY53","","RR2Y","1.5","R","Ac","53.6","7.7","0","","126.8","","$804"
"","LG SeedsLG Seeds","C2050R2C2050R2","","RR2YRR2Y","2.12.1","RR","AcAc","53.653.6","7.77.7","00","","123.9123.9","","$804$804"
"","Titan ProTitan Pro","19M4219M42","","RR2YRR2Y","1.91.9","RR","CMBCMB","53.653.6","7.77.7","00","","121.0121.0","","$804$804"
"","StineStine","19RA02 (2) §19RA02 (2) §","","RR2YRR2Y","1 91.9","RR","CMBCMB","53 453.4","7 77.7","00","","123 9123.9","","$801$801"
"","AsgrowAsgrow","AG1832 §AG1832 §","","RR2YRR2Y","1 81.8","MRMR","Ac PVAc,PV","52 952.9","7 77.7","00","","122 0122.0","","$794$794"
"","Prairie Brandiid","PB-1566R2662","","RR2Y2","1.5","R","CMB","52.8","7.7","0","","122.9","","$792$"
"","Channel","1901R2","","RR2Y","1.9","R","Ac,PV,","52.8","7.6","0","","123.4","","$791$"
"","Titan ProTitan Pro","20M120M1","","RR2YRR2Y","2.02.0","RR","AmAm","52.552.5","7.57.5","00","","124.4124.4","","$788$788"
"","KrugerKruger","K2-2002K2-2002","","RR2YRR2Y","2 02.0","RR","Ac PVAc,PV","52 452.4","7 97.9","00","","125 4125.4","","$786$786"
"","ChannelChannel","1700R21700R2","","RR2YRR2Y","1 71.7","RR","Ac PVAc,PV","52 352.3","7 97.9","00","","123 9123.9","","$784$784"
"","H ftHefty","H16Y11H16Y11","","RR2YRR2Y","1 61.6","MRMR","II","51 451.4","7 67.6","00","","123 9123.9","","$771$771"
"","Anderson","162R2Y","","RR2Y","1.6","R","None","51.3","7.5","0","","119.5","","$770"
"","Titan ProTitan Pro","15M2215M22","","RR2YRR2Y","1.51.5","RR","CMBCMB","51.351.3","7.87.8","00","","125.4125.4","","$769$769"
"","DairylandDairyland","DSR-1710R2YDSR-1710R2Y","","RR2YRR2Y","1 71.7","RR","CMBCMB","51 351.3","7 77.7","00","","122 0122.0","","$769$769"
"","HeftyHefty","H20R3H20R3","","RR2YRR2Y","2 02.0","MRMR","II","50 550.5","8 28.2","00","","121 0121.0","","$757$757"
"","PPrairie BrandiiBd","PB 1743R2PB-1743R2","","RR2YRR2Y","1 71.7","RR","CMBCMB","50 250.2","7 77.7","00","","125 8125.8","","$752$752"
"","Gold Country","1741","","RR2Y","1.7","R","Ac","50.1","7.8","0","","123.9","","$751"
"","Trelaye ay","20RR4303","","RR2Y","2.00","R","Ac,Exc,","49.99 9","7.66","00","","127.88","","$749$9"
"","HeftyHefty","H14R3H14R3","","RR2YRR2Y","1.41.4","MRMR","II","49.749.7","7.77.7","00","","122.9122.9","","$746$746"
"","Prairie BrandPrairie Brand","PB-2099NRR2PB-2099NRR2","","RR2YRR2Y","2 02.0","RR","CMBCMB","49 649.6","7 87.8","00","","126 3126.3","","$743$743"
"","WensmanWensman","W 3174NR2W 3174NR2","","RR2YRR2Y","1 71.7","RR","AcAc","49 349.3","7 67.6","00","","122 5122.5","","$740$740"
"","KKruger","K2 1602K2-1602","","RR2YRR2Y","1 61.6","R","Ac,PV","48.78","7.66","00","","125.412","","$731$31"
"","NK Brand","S18-C2 §§","","RR2Y","1.8","R","CMB","48.7","7.7","0","","126.8","","$731$"
"","KrugerKruger","K2-1902K2 1902","","RR2YRR2Y","1.91.9","RR","Ac,PVAc,PV","48.748.7","7.57.5","00","","124.4124.4","","$730$730"
"","Prairie BrandPrairie Brand","PB-1823R2PB-1823R2","","RR2YRR2Y","1 81.8","RR","NoneNone","48 548.5","7 67.6","00","","121 0121.0","","$727$727"
"","Gold CountryGold Country","15411541","","RR2YRR2Y","1 51.5","RR","AcAc","48 448.4","7 67.6","00","","110 4110.4","","$726$726"
"","","","","","","","Test Average =","47 647.6","7 77.7","00","","122 9122.9","","$713$713"
"","","","","","","","LSD (0.10) =","5.7","0.3","ns","","37.8","","566.4"
"","","","","","SCN","Seed","Yield","Moisture","Lodgingg","g","Stand","","Gross"
"Company/Brandpy","","Product/Brand†","Technol.†","Mat.","Resist.","Trmt.†","Bu/A","%","%","","(x 1000)(",")","Income"
"KrugerKruger","","K2-1901K2 1901","RR2YRR2Y","1.91.9","RR","Ac,PVAc,PV","56.456.4","7.67.6","00","","126.3126.3","","$846$846"
"StineStine","","19RA02 §19RA02 §","RR2YRR2Y","1 91.9","RR","CMBCMB","55.355.3","7 67.6","00","","120 0120.0","","$830$830"
"WensmanWensman","","W 3190NR2W 3190NR2","RR2YRR2Y","1 91.9","RR","AcAc","54 554.5","7 67.6","00","","119 5119.5","","$818$818"
"H ftHefty","","H17Y12H17Y12","RR2YRR2Y","1 71.7","MRMR","II","53 753.7","7 77.7","00","","124 4124.4","","$806$806"
"Dyna-Gro","","S15RY53","RR2Y","1.5","R","Ac","53.6","7.7","0","","126.8","","$804"
"LG SeedsLG Seeds","","C2050R2C2050R2","RR2YRR2Y","2.12.1","RR","AcAc","53.653.6","7.77.7","00","","123.9123.9","","$804$804"
"Titan ProTitan Pro","","19M4219M42","RR2YRR2Y","1.91.9","RR","CMBCMB","53.653.6","7.77.7","00","","121.0121.0","","$804$804"
"StineStine","","19RA02 (2) §19RA02 (2) §","RR2YRR2Y","1 91.9","RR","CMBCMB","53 453.4","7 77.7","00","","123 9123.9","","$801$801"
"AsgrowAsgrow","","AG1832 §AG1832 §","RR2YRR2Y","1 81.8","MRMR","Ac PVAc,PV","52 952.9","7 77.7","00","","122 0122.0","","$794$794"
"Prairie Brandiid","","PB-1566R2662","RR2Y2","1.5","R","CMB","52.8","7.7","0","","122.9","","$792$"
"Channel","","1901R2","RR2Y","1.9","R","Ac,PV,","52.8","7.6","0","","123.4","","$791$"
"Titan ProTitan Pro","","20M120M1","RR2YRR2Y","2.02.0","RR","AmAm","52.552.5","7.57.5","00","","124.4124.4","","$788$788"
"KrugerKruger","","K2-2002K2-2002","RR2YRR2Y","2 02.0","RR","Ac PVAc,PV","52 452.4","7 97.9","00","","125 4125.4","","$786$786"
"ChannelChannel","","1700R21700R2","RR2YRR2Y","1 71.7","RR","Ac PVAc,PV","52 352.3","7 97.9","00","","123 9123.9","","$784$784"
"H ftHefty","","H16Y11H16Y11","RR2YRR2Y","1 61.6","MRMR","II","51 451.4","7 67.6","00","","123 9123.9","","$771$771"
"Anderson","","162R2Y","RR2Y","1.6","R","None","51.3","7.5","0","","119.5","","$770"
"Titan ProTitan Pro","","15M2215M22","RR2YRR2Y","1.51.5","RR","CMBCMB","51.351.3","7.87.8","00","","125.4125.4","","$769$769"
"DairylandDairyland","","DSR-1710R2YDSR-1710R2Y","RR2YRR2Y","1 71.7","RR","CMBCMB","51 351.3","7 77.7","00","","122 0122.0","","$769$769"
"HeftyHefty","","H20R3H20R3","RR2YRR2Y","2 02.0","MRMR","II","50 550.5","8 28.2","00","","121 0121.0","","$757$757"
"PPrairie BrandiiBd","","PB 1743R2PB-1743R2","RR2YRR2Y","1 71.7","RR","CMBCMB","50 250.2","7 77.7","00","","125 8125.8","","$752$752"
"Gold Country","","1741","RR2Y","1.7","R","Ac","50.1","7.8","0","","123.9","","$751"
"Trelaye ay","","20RR4303","RR2Y","2.00","R","Ac,Exc,","49.99 9","7.66","00","","127.88","","$749$9"
"HeftyHefty","","H14R3H14R3","RR2YRR2Y","1.41.4","MRMR","II","49.749.7","7.77.7","00","","122.9122.9","","$746$746"
"Prairie BrandPrairie Brand","","PB-2099NRR2PB-2099NRR2","RR2YRR2Y","2 02.0","RR","CMBCMB","49 649.6","7 87.8","00","","126 3126.3","","$743$743"
"WensmanWensman","","W 3174NR2W 3174NR2","RR2YRR2Y","1 71.7","RR","AcAc","49 349.3","7 67.6","00","","122 5122.5","","$740$740"
"KKruger","","K2 1602K2-1602","RR2YRR2Y","1 61.6","R","Ac,PV","48.78","7.66","00","","125.412","","$731$31"
"NK Brand","","S18-C2 §§","RR2Y","1.8","R","CMB","48.7","7.7","0","","126.8","","$731$"
"KrugerKruger","","K2-1902K2 1902","RR2YRR2Y","1.91.9","RR","Ac,PVAc,PV","48.748.7","7.57.5","00","","124.4124.4","","$730$730"
"Prairie BrandPrairie Brand","","PB-1823R2PB-1823R2","RR2YRR2Y","1 81.8","RR","NoneNone","48 548.5","7 67.6","00","","121 0121.0","","$727$727"
"Gold CountryGold Country","","15411541","RR2YRR2Y","1 51.5","RR","AcAc","48 448.4","7 67.6","00","","110 4110.4","","$726$726"
"","","","","","","Test Average =","47 647.6","7 77.7","00","","122 9122.9","","$713$713"
"","","","","","","LSD (0.10) =","5.7","0.3","ns","","37.8","","566.4"
"","F.I.R.S.T. Managerg","","","","","C.V. =","8.8","2.9","","","56.4","","846.2"



@@ -1,39 +0,0 @@
"TILLAGE/CULTIVATION:TILLAGE/CULTIVATION:","","conventional w/ fall tillconventional w/ fall till","","","","","","","","","","",""
"PEST MANAGEMENT:PEST MANAGEMENT:","","Roundup twiceRoundup twice","","","","","","","","","","",""
"SEEDED - RATE:","","May 15M15","140,000 /A140 000 /A","","","","","","","TOP 30 foTOP 30 for YIELD of 63 TESTED","","YIELD of 63 TESTED",""
"HARVESTEDHARVESTED - STAND:STAND","","O t 3Oct 3","122 921 /A122,921 /A","","","","","","","","AVERAGE of (3) REPLICATIONSAVERAGE of (3) REPLICATIONS","",""
"","","","","","SCN","Seed","Yield","Moisture","Lodgingg","g","Stand","","Gross"
"Company/Brandpy","","Product/Brand†","Technol.†","Mat.","Resist.","Trmt.†","Bu/A","%","%","","(x 1000)(",")","Income"
"KrugerKruger","","K2-1901K2 1901","RR2YRR2Y","1.91.9","RR","Ac,PVAc,PV","56.456.4","7.67.6","00","","126.3126.3","","$846$846"
"StineStine","","19RA02 §19RA02 §","RR2YRR2Y","1 91.9","RR","CMBCMB","55.355.3","7 67.6","00","","120 0120.0","","$830$830"
"WensmanWensman","","W 3190NR2W 3190NR2","RR2YRR2Y","1 91.9","RR","AcAc","54 554.5","7 67.6","00","","119 5119.5","","$818$818"
"H ftHefty","","H17Y12H17Y12","RR2YRR2Y","1 71.7","MRMR","II","53 753.7","7 77.7","00","","124 4124.4","","$806$806"
"Dyna-Gro","","S15RY53","RR2Y","1.5","R","Ac","53.6","7.7","0","","126.8","","$804"
"LG SeedsLG Seeds","","C2050R2C2050R2","RR2YRR2Y","2.12.1","RR","AcAc","53.653.6","7.77.7","00","","123.9123.9","","$804$804"
"Titan ProTitan Pro","","19M4219M42","RR2YRR2Y","1.91.9","RR","CMBCMB","53.653.6","7.77.7","00","","121.0121.0","","$804$804"
"StineStine","","19RA02 (2) §19RA02 (2) §","RR2YRR2Y","1 91.9","RR","CMBCMB","53 453.4","7 77.7","00","","123 9123.9","","$801$801"
"AsgrowAsgrow","","AG1832 §AG1832 §","RR2YRR2Y","1 81.8","MRMR","Ac PVAc,PV","52 952.9","7 77.7","00","","122 0122.0","","$794$794"
"Prairie Brandiid","","PB-1566R2662","RR2Y2","1.5","R","CMB","52.8","7.7","0","","122.9","","$792$"
"Channel","","1901R2","RR2Y","1.9","R","Ac,PV,","52.8","7.6","0","","123.4","","$791$"
"Titan ProTitan Pro","","20M120M1","RR2YRR2Y","2.02.0","RR","AmAm","52.552.5","7.57.5","00","","124.4124.4","","$788$788"
"KrugerKruger","","K2-2002K2-2002","RR2YRR2Y","2 02.0","RR","Ac PVAc,PV","52 452.4","7 97.9","00","","125 4125.4","","$786$786"
"ChannelChannel","","1700R21700R2","RR2YRR2Y","1 71.7","RR","Ac PVAc,PV","52 352.3","7 97.9","00","","123 9123.9","","$784$784"
"H ftHefty","","H16Y11H16Y11","RR2YRR2Y","1 61.6","MRMR","II","51 451.4","7 67.6","00","","123 9123.9","","$771$771"
"Anderson","","162R2Y","RR2Y","1.6","R","None","51.3","7.5","0","","119.5","","$770"
"Titan ProTitan Pro","","15M2215M22","RR2YRR2Y","1.51.5","RR","CMBCMB","51.351.3","7.87.8","00","","125.4125.4","","$769$769"
"DairylandDairyland","","DSR-1710R2YDSR-1710R2Y","RR2YRR2Y","1 71.7","RR","CMBCMB","51 351.3","7 77.7","00","","122 0122.0","","$769$769"
"HeftyHefty","","H20R3H20R3","RR2YRR2Y","2 02.0","MRMR","II","50 550.5","8 28.2","00","","121 0121.0","","$757$757"
"PPrairie BrandiiBd","","PB 1743R2PB-1743R2","RR2YRR2Y","1 71.7","RR","CMBCMB","50 250.2","7 77.7","00","","125 8125.8","","$752$752"
"Gold Country","","1741","RR2Y","1.7","R","Ac","50.1","7.8","0","","123.9","","$751"
"Trelaye ay","","20RR4303","RR2Y","2.00","R","Ac,Exc,","49.99 9","7.66","00","","127.88","","$749$9"
"HeftyHefty","","H14R3H14R3","RR2YRR2Y","1.41.4","MRMR","II","49.749.7","7.77.7","00","","122.9122.9","","$746$746"
"Prairie BrandPrairie Brand","","PB-2099NRR2PB-2099NRR2","RR2YRR2Y","2 02.0","RR","CMBCMB","49 649.6","7 87.8","00","","126 3126.3","","$743$743"
"WensmanWensman","","W 3174NR2W 3174NR2","RR2YRR2Y","1 71.7","RR","AcAc","49 349.3","7 67.6","00","","122 5122.5","","$740$740"
"KKruger","","K2 1602K2-1602","RR2YRR2Y","1 61.6","R","Ac,PV","48.78","7.66","00","","125.412","","$731$31"
"NK Brand","","S18-C2 §§","RR2Y","1.8","R","CMB","48.7","7.7","0","","126.8","","$731$"
"KrugerKruger","","K2-1902K2 1902","RR2YRR2Y","1.91.9","RR","Ac,PVAc,PV","48.748.7","7.57.5","00","","124.4124.4","","$730$730"
"Prairie BrandPrairie Brand","","PB-1823R2PB-1823R2","RR2YRR2Y","1 81.8","RR","NoneNone","48 548.5","7 67.6","00","","121 0121.0","","$727$727"
"Gold CountryGold Country","","15411541","RR2YRR2Y","1 51.5","RR","AcAc","48 448.4","7 67.6","00","","110 4110.4","","$726$726"
"","","","","","","Test Average =","47 647.6","7 77.7","00","","122 9122.9","","$713$713"
"","","","","","","LSD (0.10) =","5.7","0.3","ns","","37.8","","566.4"
"","F.I.R.S.T. Managerg","","","","","C.V. =","8.8","2.9","","","56.4","","846.2"
1 TILLAGE/CULTIVATION: conventional w/ fall till
2 PEST MANAGEMENT: Roundup twice
3 SEEDED - RATE: May 15 140,000 /A TOP 30 for YIELD of 63 TESTED
4 HARVESTED - STAND: Oct 3 122,921 /A AVERAGE of (3) REPLICATIONS
5 SCN Seed Yield Moisture Lodging Stand Gross
6 Company/Brand Product/Brand† Technol.† Mat. Resist. Trmt.† Bu/A % % (x 1000) Income
7 Kruger K2-1901 RR2Y 1.9 R Ac,PV 56.4 7.6 0 126.3 $846
8 Stine 19RA02 § RR2Y 1.9 R CMB 55.3 7.6 0 120.0 $830
9 Wensman W 3190NR2 RR2Y 1.9 R Ac 54.5 7.6 0 119.5 $818
10 Hefty H17Y12 RR2Y 1.7 MR I 53.7 7.7 0 124.4 $806
11 Dyna-Gro S15RY53 RR2Y 1.5 R Ac 53.6 7.7 0 126.8 $804
12 LG Seeds C2050R2 RR2Y 2.1 R Ac 53.6 7.7 0 123.9 $804
13 Titan Pro 19M42 RR2Y 1.9 R CMB 53.6 7.7 0 121.0 $804
14 Stine 19RA02 (2) § RR2Y 1.9 R CMB 53.4 7.7 0 123.9 $801
15 Asgrow AG1832 § RR2Y 1.8 MR Ac,PV 52.9 7.7 0 122.0 $794
16 Prairie Brand PB-1566R2 RR2Y 1.5 R CMB 52.8 7.7 0 122.9 $792
17 Channel 1901R2 RR2Y 1.9 R Ac,PV 52.8 7.6 0 123.4 $791
18 Titan Pro 20M1 RR2Y 2.0 R Am 52.5 7.5 0 124.4 $788
19 Kruger K2-2002 RR2Y 2.0 R Ac,PV 52.4 7.9 0 125.4 $786
20 Channel 1700R2 RR2Y 1.7 R Ac,PV 52.3 7.9 0 123.9 $784
21 Hefty H16Y11 RR2Y 1.6 MR I 51.4 7.6 0 123.9 $771
22 Anderson 162R2Y RR2Y 1.6 R None 51.3 7.5 0 119.5 $770
23 Titan Pro 15M22 RR2Y 1.5 R CMB 51.3 7.8 0 125.4 $769
24 Dairyland DSR-1710R2Y RR2Y 1.7 R CMB 51.3 7.7 0 122.0 $769
25 Hefty H20R3 RR2Y 2.0 MR I 50.5 8.2 0 121.0 $757
26 Prairie Brand PB-1743R2 RR2Y 1.7 R CMB 50.2 7.7 0 125.8 $752
27 Gold Country 1741 RR2Y 1.7 R Ac 50.1 7.8 0 123.9 $751
28 Trelay 20RR43 RR2Y 2.0 R Ac,Exc 49.9 7.6 0 127.8 $749
29 Hefty H14R3 RR2Y 1.4 MR I 49.7 7.7 0 122.9 $746
30 Prairie Brand PB-2099NRR2 RR2Y 2.0 R CMB 49.6 7.8 0 126.3 $743
31 Wensman W 3174NR2 RR2Y 1.7 R Ac 49.3 7.6 0 122.5 $740
32 Kruger K2-1602 RR2Y 1.6 R Ac,PV 48.7 7.6 0 125.4 $731
33 NK Brand S18-C2 § RR2Y 1.8 R CMB 48.7 7.7 0 126.8 $731
34 Kruger K2-1902 RR2Y 1.9 R Ac,PV 48.7 7.5 0 124.4 $730
35 Prairie Brand PB-1823R2 RR2Y 1.8 R None 48.5 7.6 0 121.0 $727
36 Gold Country 1541 RR2Y 1.5 R Ac 48.4 7.6 0 110.4 $726
37 Test Average = 47.6 7.7 0 122.9 $713
38 LSD (0.10) = 5.7 0.3 ns 37.8 566.4
39 F.I.R.S.T. Manager C.V. = 8.8 2.9 56.4 846.2
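Doubled tokens such as `KrugerKruger` or `56.456.4` are a common artifact when a PDF simulates bold text by painting each glyph twice, and they show up verbatim in extracted tables like this one. A minimal post-processing sketch (not part of camelot itself) that collapses exact token doubling:

```python
def collapse_doubled(token: str) -> str:
    """Collapse a token that is an exact self-duplication,
    e.g. 'KrugerKruger' -> 'Kruger'. Beware of false positives:
    a legitimate value like '55' would also collapse to '5'."""
    half = len(token) // 2
    if token and len(token) % 2 == 0 and token[:half] == token[half:]:
        return token[:half]
    return token

def collapse_line(line: str) -> str:
    # Apply the collapse token-by-token; untouched tokens pass through.
    return " ".join(collapse_doubled(t) for t in line.split())

print(collapse_line("KrugerKruger K2-1901K2-1901 RR2YRR2Y 56.456.4"))
# -> Kruger K2-1901 RR2Y 56.4
```

Note that partially doubled tokens (e.g. `K2-1901K2 1901`, where the duplicate is split by whitespace or differs by a character) are not handled by this exact-match check and need fuzzier logic.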


@ -1,66 +0,0 @@
"0","1","2","3","4"
"","DLHS-4 (2012-13)","","DLHS-3 (2007-08)",""
"Indicators","TOTAL","RURAL","TOTAL","RURAL"
"Child feeding practices (based on last-born child in the reference period) (%)","","","",""
"Children age 0-5 months exclusively breastfed9 .......................................................................... 76.9 80.0
Children age 6-9 months receiving solid/semi-solid food and breast milk .................................... 78.6 75.0
Children age 12-23 months receiving breast feeding along with complementary feeding ........... 31.8 24.2
Children age 6-35 months exclusively breastfed for at least 6 months ........................................ 4.7 3.4
Children under 3 years breastfed within one hour of birth ............................................................ 42.9 46.5","","","NA","NA"
"","","","85.9","89.3"
"","","","NA","NA"
"","","","30.0","27.7"
"","","","50.6","52.9"
"Birth Weight (%) (age below 36 months)","","","",""
"Percentage of Children weighed at birth ...................................................................................... 38.8 41.0 NA NA
Percentage of Children with low birth weight (out of those who weighted) ( below 2.5 kg) ......... 12.8 14.6 NA NA","","","",""
"Awareness about Diarrhoea (%)","","","",""
"Women know about what to do when a child gets diarrhoea ..................................................... 96.3 96.2","","","94.4","94.2"
"Awareness about ARI (%)","","","",""
"Women aware about danger signs of ARI10 ................................................................................. 55.9 59.7","","","32.8","34.7"
"Treatment of childhood diseases (based on last two surviving children born during the","","","",""
"","","","",""
"reference period) (%)","","","",""
"","","","",""
"Prevalence of diarrhoea in last 2 weeks for under 5 years old children ....................................... 1.6 1.3 6.5 7.0
Children with diarrhoea in the last 2 weeks and received ORS11 ................................................. 100.0 100.0 54.8 53.3
Children with diarrhoea in the last 2 weeks and sought advice/treatment ................................... 100.0 50.0 72.9 73.3
Prevalence of ARI in last 2 weeks for under 5 years old children ............................................ 4.3 3.9 3.9 4.2
Children with acute respiratory infection or fever in last 2 weeks and sought advice/treatment 37.5 33.3 69.8 68.0
Children with diarrhoea in the last 2 weeks given Zinc along with ORS ...................................... 66.6 50.0 NA NA","","","6.5","7.0"
"","","","54.8","53.3"
"","","","72.9","73.3"
"","","","3.9","4.2"
"","","","69.8","68.0"
"Awareness of RTI/STI and HIV/AIDS (%)","","","",""
"Women who have heard of RTI/STI ............................................................................................. 55.8 57.1
Women who have heard of HIV/AIDS .......................................................................................... 98.9 99.0
Women who have any symptoms of RTI/STI .............................................................................. 13.9 13.5
Women who know the place to go for testing of HIV/AIDS12 ....................................................... 59.9 57.1
Women underwent test for detecting HIV/AIDS12 ........................................................................ 37.3 36.8","","","34.8","38.2"
"","","","98.3","98.1"
"","","","15.6","16.1"
"","","","48.6","46.3"
"","","","14.1","12.3"
"Utilization of Government Health Services (%)","","","",""
"Antenatal care .............................................................................................................................. 69.7 66.7 79.0 81.0
Treatment for pregnancy complications ....................................................................................... 57.1 59.3 88.0 87.8
Treatment for post-delivery complications ................................................................................... 33.3 33.3 68.4 68.4
Treatment for vaginal discharge ................................................................................................... 20.0 25.0 73.9 71.4
Treatment for children with diarrhoea13 ........................................................................................ 50.0 100.0 NA NA
Treatment for children with ARI13 ................................................................................................. NA NA NA NA","","","79.0","81.0"
"","","","88.0","87.8"
"","","","68.4","68.4"
"","","","73.9","71.4"
"Birth Registration (%)","","","",""
"Children below age 5 years having birth registration done .......................................................... 40.6 44.3 NA NA
Children below age 5 years who received birth certificate (out of those registered) .................... 65.9 63.6 NA NA","","","",""
"Personal Habits (age 15 years and above) (%)","","","",""
"Men who use any kind of smokeless tobacco ............................................................................. 74.6 74.2 NA NA
Women who use any kind of smokeless tobacco ........................................................................ 59.5 58.9 NA NA
Men who smoke ........................................................................................................................... 56.0 56.4 NA NA
Women who smoke ...................................................................................................................... 18.4 18.0 NA NA
Men who consume alcohol ........................................................................................................... 58.4 58.2 NA NA
Women who consume alcohol ..................................................................................................... 10.9 9.3 NA NA","","","",""
"9 Children Who were given nothing but breast milk till the survey date 10Acute Respiratory Infections11Oral Rehydration Solutions/Salts.12Based on","","","",""
"the women who have heard of HIV/AIDS.13 Last two weeks","","","",""


@ -1,44 +0,0 @@
"0","1","2","3","4","5","6","7","8","9","10","11","12","13","14","15","16","17","18","19","20","21","22","23"
"","Table: 5 Public Health Outlay 2012-13 (Budget Estimates) (Rs. in 000)","","","","","","","","","","","","","","","","","","","","","",""
"","States-A","","","Revenue","","","","","","Capital","","","","","","Total","","","Others(1)","","","Total",""
"","","","","","","","","","","","","","","","","Revenue &","","","","","","",""
"","","","Medical & Family Medical & Family
Public Welfare Public Welfare
Health Health","","","","","","","","","","","","","","","","","","","",""
"","","","","","","","","","","","","","","","","Capital","","","","","","",""
"","","","","","","","","","","","","","","","","","","","","","","",""
"","Andhra Pradesh","","","47,824,589","","","9,967,837","","","1,275,000","","","15,000","","","59,082,426","","","14,898,243","","","73,980,669",""
"","","","","","","","","","","","","","","","","","","","","","","",""
"Arunachal Pradesh 2,241,609 107,549 23,000 0 2,372,158 86,336 2,458,494","","","","","","","","","","","","","","","","","","","","","","",""
"","Assam","","","14,874,821","","","2,554,197","","","161,600","","","0","","","17,590,618","","","4,408,505","","","21,999,123",""
"","","","","","","","","","","","","","","","","","","","","","","",""
"Bihar 21,016,708 4,332,141 5,329,000 0 30,677,849 2,251,571 32,929,420","","","","","","","","","","","","","","","","","","","","","","",""
"","Chhattisgarh","","","11,427,311","","","1,415,660","","","2,366,592","","","0","","","15,209,563","","","311,163","","","15,520,726",""
"","","","","","","","","","","","","","","","","","","","","","","",""
"Delhi 28,084,780 411,700 4,550,000 0 33,046,480 5,000 33,051,480","","","","","","","","","","","","","","","","","","","","","","",""
"","Goa","","","4,055,567","","","110,000","","","330,053","","","0","","","4,495,620","","","12,560","","","4,508,180",""
"","","","","","","","","","","","","","","","","","","","","","","",""
"Gujarat 26,328,400 6,922,900 12,664,000 42,000 45,957,300 455,860 46,413,160","","","","","","","","","","","","","","","","","","","","","","",""
"","Haryana","","","15,156,681","","","1,333,527","","","40,100","","","0","","","16,530,308","","","1,222,698","","","17,753,006",""
"","","","","","","","","","","","","","","","","","","","","","","",""
"Himachal Pradesh 8,647,229 1,331,529 580,800 0 10,559,558 725,315 11,284,873","","","","","","","","","","","","","","","","","","","","","","",""
"","Jammu & Kashmir","","","14,411,984","","","270,840","","","3,188,550","","","0","","","17,871,374","","","166,229","","","18,037,603",""
"","","","","","","","","","","","","","","","","","","","","","","",""
"Jharkhand 8,185,079 3,008,077 3,525,558 0 14,718,714 745,139 15,463,853","","","","","","","","","","","","","","","","","","","","","","",""
"","Karnataka","","","34,939,843","","","4,317,801","","","3,669,700","","","0","","","42,927,344","","","631,088","","","43,558,432",""
"","","","","","","","","","","","","","","","","","","","","","","",""
"Kerala 27,923,965 3,985,473 929,503 0 32,838,941 334,640 33,173,581","","","","","","","","","","","","","","","","","","","","","","",""
"","Madhya Pradesh","","","28,459,540","","","4,072,016","","","3,432,711","","","0","","","35,964,267","","","472,139","","","36,436,406",""
"","","","","","","","","","","","","","","","","","","","","","","",""
"Maharashtra 55,011,100 6,680,721 5,038,576 0 66,730,397 313,762 67,044,159","","","","","","","","","","","","","","","","","","","","","","",""
"","Manipur","","","2,494,600","","","187,700","","","897,400","","","0","","","3,579,700","","","0","","","3,579,700",""
"","","","","","","","","","","","","","","","","","","","","","","",""
"Meghalaya 2,894,093 342,893 705,500 5,000 3,947,486 24,128 3,971,614","","","","","","","","","","","","","","","","","","","","","","",""
"","Mizoram","","","1,743,501","","","84,185","","","10,250","","","0","","","1,837,936","","","17,060","","","1,854,996",""
"","","","","","","","","","","","","","","","","","","","","","","",""
"Nagaland 2,368,724 204,329 226,400 0 2,799,453 783,054 3,582,507","","","","","","","","","","","","","","","","","","","","","","",""
"","Odisha","","","14,317,179","","","2,552,292","","","1,107,250","","","0","","","17,976,721","","","451,438","","","18,428,159",""
"","","","","","","","","","","","","","","","","","","","","","","",""
"Puducherry 4,191,757 52,249 192,400 0 4,436,406 2,173 4,438,579","","","","","","","","","","","","","","","","","","","","","","",""
"","Punjab","","","19,775,485","","","2,208,343","","","2,470,882","","","0","","","24,454,710","","","1,436,522","","","25,891,232",""
"","","","","","","","","","","","","","","","","","","","","","","",""


@ -1,71 +0,0 @@
"0","1","2","3","4"
"","DLHS-4 (2012-13)","","DLHS-3 (2007-08)",""
"Indicators","TOTAL","RURAL","TOTAL","RURAL"
"Reported Prevalence of Morbidity","","","",""
"Any Injury ..................................................................................................................................... 1.9 2.1
Acute Illness ................................................................................................................................. 4.5 5.6
Chronic Illness .............................................................................................................................. 5.1 4.1","","","",""
"","","","",""
"","","","",""
"Reported Prevalence of Chronic Illness during last one year (%)","","","",""
"Disease of respiratory system ...................................................................................................... 11.7 15.0
Disease of cardiovascular system ................................................................................................ 8.9 9.3
Persons suffering from tuberculosis ............................................................................................. 2.2 1.5","","","",""
"","","","",""
"","","","",""
"Anaemia Status by Haemoglobin Level14 (%)","","","",""
"Children (6-59 months) having anaemia ...................................................................................... 68.5 71.9
Children (6-59 months) having severe anaemia .......................................................................... 6.7 9.4
Children (6-9 Years) having anaemia - Male ................................................................................ 67.1 71.4
Children (6-9 Years) having severe anaemia - Male .................................................................... 4.4 2.4
Children (6-9 Years) having anaemia - Female ........................................................................... 52.4 48.8
Children (6-9 Years) having severe anaemia - Female ................................................................ 1.2 0.0
Children (6-14 years) having anaemia - Male ............................................................................. 50.8 62.5
Children (6-14 years) having severe anaemia - Male .................................................................. 3.7 3.6
Children (6-14 years) having anaemia - Female ......................................................................... 48.3 50.0
Children (6-14 years) having severe anaemia - Female .............................................................. 4.3 6.1
Children (10-19 Years15) having anaemia - Male ......................................................................... 37.9 51.2
Children (10-19 Years15) having severe anaemia - Male ............................................................. 3.5 4.0
Children (10-19 Years15) having anaemia - Female ..................................................................... 46.6 52.1
Children (10-19 Years15) having severe anaemia - Female ......................................................... 6.4 6.5
Adolescents (15-19 years) having anaemia ................................................................................ 39.4 46.5
Adolescents (15-19 years) having severe anaemia ..................................................................... 5.4 5.1
Pregnant women (15-49 aged) having anaemia .......................................................................... 48.8 51.5
Pregnant women (15-49 aged) having severe anaemia .............................................................. 7.1 8.8
Women (15-49 aged) having anaemia ......................................................................................... 45.2 51.7
Women (15-49 aged) having severe anaemia ............................................................................. 4.8 5.9
Persons (20 years and above) having anaemia ........................................................................... 37.8 42.1
Persons (20 years and above) having Severe anaemia .............................................................. 4.6 4.8","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"","","","",""
"Blood Sugar Level (age 18 years and above) (%)","","","",""
"Blood Sugar Level >140 mg/dl (high) ........................................................................................... 12.9 11.1
Blood Sugar Level >160 mg/dl (very high) ................................................................................... 7.0 5.1","","","",""
"","","","",""
"Hypertension (age 18 years and above) (%)","","","",""
"Above Normal Range (Systolic >140 mm of Hg & Diastolic >90 mm of Hg ) .............................. 23.8 22.8
Moderately High (Systolic >160 mm of Hg & Diastolic >100 mm of Hg ) ..................................... 8.2 7.1
Very High (Systolic >180 mm of Hg & Diastolic >110 mm of Hg ) ............................................... 3.7 3.1","","","",""
"","","","",""
"","","","",""
"14 Any anaemia below 11g/dl, severe anaemia below 7g/dl. 15 Excluding age group 19 years","","","",""
"Chronic Illness :Any person with symptoms persisting for longer than one month is defined as suffering from chronic illness","","","",""
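Expected tables like the ones above are stored in the repo as CSV fixtures and compared cell-by-cell against freshly extracted DataFrames. A hedged sketch of that comparison pattern using pandas alone (the inline data is illustrative, not the project's actual test code; a real run would substitute something like `camelot.read_pdf(...)[0].df` for the stand-in frame):

```python
import io

import pandas as pd
from pandas.testing import assert_frame_equal

# Expected output as it would be stored in a fixture CSV
# (numeric string column headers "0","1",... as in the files above).
expected_csv = '"0","1"\n"Indicators","TOTAL"\n"Any Injury","1.9"\n'
expected = pd.read_csv(io.StringIO(expected_csv), dtype=str)

# Stand-in for a freshly extracted table; camelot keeps every
# cell as a string, hence dtype=str on both sides.
extracted = pd.DataFrame(
    {"0": ["Indicators", "Any Injury"], "1": ["TOTAL", "1.9"]}
)

assert_frame_equal(expected, extracted)  # raises AssertionError on any mismatch
```

Keeping everything as strings sidesteps float round-tripping, so `"1.9"` in the fixture must match `"1.9"` in the extraction exactly.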


@ -22,8 +22,8 @@ import sys
# sys.path.insert(0, os.path.abspath('..'))
# Insert Camelot's path into the system.
sys.path.insert(0, os.path.abspath(".."))
sys.path.insert(0, os.path.abspath("_themes"))
sys.path.insert(0, os.path.abspath('..'))
sys.path.insert(0, os.path.abspath('_themes'))
import camelot
@ -38,33 +38,33 @@ import camelot
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.napoleon",
"sphinx.ext.intersphinx",
"sphinx.ext.todo",
"sphinx.ext.viewcode",
'sphinx.ext.autodoc',
'sphinx.ext.napoleon',
'sphinx.ext.intersphinx',
'sphinx.ext.todo',
'sphinx.ext.viewcode',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = ".rst"
source_suffix = '.rst'
# The encoding of source files.
#
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = "index"
master_doc = 'index'
# General information about the project.
project = u"Camelot"
copyright = u"2021, Camelot Developers"
author = u"Vinayak Mehta"
project = u'Camelot'
copyright = u'2018, Peeply Private Ltd (Singapore)'
author = u'Vinayak Mehta'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
@ -94,7 +94,7 @@ language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ["_build"]
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
@@ -114,7 +114,7 @@ add_module_names = True
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "flask_theme_support.FlaskyStyle"
pygments_style = 'flask_theme_support.FlaskyStyle'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
@@ -130,18 +130,18 @@ todo_include_todos = True
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = "alabaster"
html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
"show_powered_by": False,
"github_user": "camelot-dev",
"github_repo": "camelot",
"github_banner": True,
"show_related": False,
"note_bg": "#FFF59C",
'show_powered_by': False,
'github_user': 'socialcopsdev',
'github_repo': 'camelot',
'github_banner': True,
'show_related': False,
'note_bg': '#FFF59C'
}
# Add any paths that contain custom themes here, relative to this directory.
@@ -164,12 +164,12 @@ html_theme_options = {
# The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = "_static/favicon.ico"
html_favicon = '_static/favicon.ico'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
@@ -189,21 +189,10 @@ html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
"index": [
"sidebarintro.html",
"relations.html",
"sourcelink.html",
"searchbox.html",
"hacks.html",
],
"**": [
"sidebarlogo.html",
"localtoc.html",
"relations.html",
"sourcelink.html",
"searchbox.html",
"hacks.html",
],
'index': ['sidebarintro.html', 'relations.html', 'sourcelink.html',
'searchbox.html', 'hacks.html'],
'**': ['sidebarlogo.html', 'localtoc.html', 'relations.html',
'sourcelink.html', 'searchbox.html', 'hacks.html']
}
# Additional templates that should be rendered to pages, maps page names to
@@ -260,30 +249,34 @@ html_show_copyright = True
# html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = "Camelotdoc"
htmlhelp_basename = 'Camelotdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, "Camelot.tex", u"Camelot Documentation", u"Vinayak Mehta", "manual"),
(master_doc, 'Camelot.tex', u'Camelot Documentation',
u'Vinayak Mehta', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
@@ -323,7 +316,10 @@ latex_documents = [
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [(master_doc, "Camelot", u"Camelot Documentation", [author], 1)]
man_pages = [
(master_doc, 'Camelot', u'Camelot Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
#
@@ -336,15 +332,9 @@ man_pages = [(master_doc, "Camelot", u"Camelot Documentation", [author], 1)]
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(
master_doc,
"Camelot",
u"Camelot Documentation",
author,
"Camelot",
"One line description of project.",
"Miscellaneous",
),
(master_doc, 'Camelot', u'Camelot Documentation',
author, 'Camelot', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
@@ -366,6 +356,6 @@ texinfo_documents = [
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
"https://docs.python.org/2": None,
"http://pandas.pydata.org/pandas-docs/stable": None,
}
'https://docs.python.org/2': None,
'http://pandas.pydata.org/pandas-docs/stable': None
}

View File

@@ -7,7 +7,7 @@ If you're reading this, you're probably looking to contributing to Camelot. *Tim
This document will help you get started with contributing documentation, code, testing and filing issues. If you have any questions, feel free to reach out to `Vinayak Mehta`_, the author and maintainer.
.. _Vinayak Mehta: https://www.vinayakmehta.com
.. _Vinayak Mehta: http://vinayak-mehta.github.io
Code Of Conduct
---------------
@@ -24,34 +24,30 @@ As the `Requests Code Of Conduct`_ states, **all contributions are welcome**, as
.. _Requests Code Of Conduct: http://docs.python-requests.org/en/master/dev/contributing/#be-cordial
Your first contribution
Your First Contribution
-----------------------
A great way to start contributing to Camelot is to pick an issue tagged with the `help wanted`_ or the `good first issue`_ tags. If you're unable to find a good first issue, feel free to contact the maintainer.
A great way to start contributing to Camelot is to pick an issue tagged with the `Contributor Friendly`_ or the `Easy`_ tags. If you're unable to find a good first issue, feel free to contact the maintainer.
.. _help wanted: https://github.com/camelot-dev/camelot/labels/help%20wanted
.. _good first issue: https://github.com/camelot-dev/camelot/labels/good%20first%20issue
.. _Contributor Friendly: https://github.com/socialcopsdev/camelot/labels/Contributor%20Friendly
.. _Easy: https://github.com/socialcopsdev/camelot/labels/Level%3A%20Easy
Setting up a development environment
------------------------------------
To install the dependencies needed for development, you can use pip::
$ pip install "camelot-py[dev]"
Alternatively, you can clone the project repository, and install using pip::
$ pip install ".[dev]"
$ pip install camelot-py[dev]
Pull Requests
-------------
Submit a pull request
Submit a Pull Request
^^^^^^^^^^^^^^^^^^^^^
The preferred workflow for contributing to Camelot is to fork the `project repository`_ on GitHub, clone, develop on a branch and then finally submit a pull request. Here are the steps:
The preferred workflow for contributing to Camelot is to fork the `project repository`_ on GitHub, clone, develop on a branch and then finally submit a pull request. Steps:
.. _project repository: https://github.com/camelot-dev/camelot
.. _project repository: https://github.com/socialcopsdev/camelot
1. Fork the project repository. Click on the Fork button near the top of the page. This creates a copy of the code under your account on the GitHub.
@@ -80,7 +76,7 @@ Now it's time to go to the your fork of Camelot and create a pull request! You c
.. _follow these instructions: https://help.github.com/articles/creating-a-pull-request-from-a-fork/
Work on your pull request
Work on your Pull Request
^^^^^^^^^^^^^^^^^^^^^^^^^
We recommend that your pull request complies with the following guidelines:
@@ -93,7 +89,7 @@ We recommend that your pull request complies with the following guidelines:
.. _numpydoc: https://numpydoc.readthedocs.io/en/latest/format.html
- Make sure your commit messages follow `the seven rules of a great git commit message`_:
- Make sure your commit messages follow `the seven rules of a great git commit message`_.
- Separate subject from body with a blank line
- Limit the subject line to 50 characters
- Capitalize the subject line
@@ -123,7 +119,7 @@ Writing documentation, function docstrings, examples and tutorials is a great wa
The documentation is written in `reStructuredText`_, with `Sphinx`_ used to generate these lovely HTML files that you're currently reading (unless you're reading this on GitHub). You can edit the documentation using any text editor and then generate the HTML output by running `make html` in the ``docs/`` directory.
The function docstrings are written using the `numpydoc`_ extension for Sphinx. Make sure you check out its format guidelines before you start writing one.
The function docstrings are written using the `numpydoc`_ extension for Sphinx. Make sure you check out how its format guidelines, before you start writing one.
.. _reStructuredText: https://en.wikipedia.org/wiki/ReStructuredText
.. _Sphinx: http://www.sphinx-doc.org/en/master/
@@ -132,14 +128,14 @@ The function docstrings are written using the `numpydoc`_ extension for Sphinx.
Filing Issues
-------------
We use `GitHub issues`_ to keep track of all issues and pull requests. Before opening an issue (which asks a question or reports a bug), please use GitHub search to look for existing issues (both open and closed) that may be similar.
We use `GitHub issues`_ to keep track of all issues and pull requests. Before opening an issue (which asks a question or reports a bug), it is advisable to use GitHub search to look for existing issues (both open and closed) that may be similar.
.. _GitHub issues: https://github.com/camelot-dev/camelot/issues
.. _GitHub issues: https://docs.pytest.org/en/latest/
Questions
^^^^^^^^^
Please don't use GitHub issues for support questions. A better place for them would be `Stack Overflow`_. Make sure you tag them using the ``python-camelot`` tag.
Please don't use GitHub issues for support questions, a better place for them would be `Stack Overflow`_. Make sure you tag them using the ``python-camelot`` tag.
.. _Stack Overflow: http://stackoverflow.com

View File

@@ -8,43 +8,25 @@ Camelot: PDF Table Extraction for Humans
Release v\ |version|. (:ref:`Installation <install>`)
.. image:: https://travis-ci.org/camelot-dev/camelot.svg?branch=master
:target: https://travis-ci.org/camelot-dev/camelot
.. image:: https://readthedocs.org/projects/camelot-py/badge/?version=master
:target: https://camelot-py.readthedocs.io/en/master/
:alt: Documentation Status
.. image:: https://codecov.io/github/camelot-dev/camelot/badge.svg?branch=master&service=github
:target: https://codecov.io/github/camelot-dev/camelot?branch=master
.. image:: https://img.shields.io/pypi/v/camelot-py.svg
.. image:: https://img.shields.io/badge/license-MIT-lightgrey.svg
:target: https://pypi.org/project/camelot-py/
.. image:: https://img.shields.io/pypi/l/camelot-py.svg
.. image:: https://img.shields.io/badge/python-2.7-blue.svg
:target: https://pypi.org/project/camelot-py/
.. image:: https://img.shields.io/pypi/pyversions/camelot-py.svg
:target: https://pypi.org/project/camelot-py/
**Camelot** is a Python library which makes it easy for *anyone* to extract tables from PDF files!
.. image:: https://badges.gitter.im/camelot-dev/Lobby.png
:target: https://gitter.im/camelot-dev/Lobby
.. note:: Camelot only works with:
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/ambv/black
- Python 2, with **Python 3** support `on the way`_.
- Text-based PDFs and not scanned documents. If you can click-and-drag to select text in your table in a PDF viewer, then your PDF is text-based. Support for image-based PDFs using **OCR** is `planned`_.
.. image:: https://img.shields.io/badge/continous%20quality-deepsource-lightgrey
:target: https://deepsource.io/gh/camelot-dev/camelot/?ref=repository-badge
.. _on the way: https://github.com/socialcopsdev/camelot/issues/81
.. _planned: https://github.com/socialcopsdev/camelot/issues/101
**Camelot** is a Python library that can help you extract tables from PDFs!
------------------------
.. note:: You can also check out `Excalibur`_, the web interface to Camelot!
.. _Excalibur: https://github.com/camelot-dev/excalibur
----
**Here's how you can extract tables from PDFs.** You can check out the PDF used in this example `here`_.
**Here's how you can extract tables from PDF files.** Check out the PDF used in this example, `here`_.
.. _here: _static/pdf/foo.pdf
@@ -53,8 +35,8 @@ Release v\ |version|. (:ref:`Installation <install>`)
>>> import camelot
>>> tables = camelot.read_pdf('foo.pdf')
>>> tables
<TableList n=1>
>>> tables.export('foo.csv', f='csv', compress=True) # json, excel, html, markdown, sqlite
<TableList tables=1>
>>> tables.export('foo.csv', f='csv', compress=True) # json, excel, html
>>> tables[0]
<Table shape=(7, 7)>
>>> tables[0].parsing_report
@@ -64,61 +46,48 @@ Release v\ |version|. (:ref:`Installation <install>`)
'order': 1,
'page': 1
}
>>> tables[0].to_csv('foo.csv') # to_json, to_excel, to_html, to_markdown, to_sqlite
>>> tables[0].to_csv('foo.csv') # to_json, to_excel, to_html
>>> tables[0].df # get a pandas DataFrame!
.. csv-table::
:file: _static/csv/foo.csv
Camelot also comes packaged with a :ref:`command-line interface <cli>`!
.. note:: Camelot only works with text-based PDFs and not scanned documents. (As Tabula `explains`_, "If you can click and drag to select text in your table in a PDF viewer, then your PDF is text-based".)
You can check out some frequently asked questions :ref:`here <faq>`.
.. _explains: https://github.com/tabulapdf/tabula#why-tabula
There's a :ref:`command-line interface <cli>` too!
Why Camelot?
------------
- **Configurability**: Camelot gives you control over the table extraction process with :ref:`tweakable settings <advanced>`.
- **Metrics**: You can discard bad tables based on metrics like accuracy and whitespace, without having to manually look at each table.
- **Output**: Each table is extracted into a **pandas DataFrame**, which seamlessly integrates into `ETL and data analysis workflows`_. You can also export tables to multiple formats, which include CSV, JSON, Excel, HTML, Markdown, and SQLite.
- **You are in control**: Unlike other libraries and tools which either give a nice output or fail miserably (with no in-between), Camelot gives you the power to tweak table extraction. (Since everything in the real world, including PDF table extraction, is fuzzy.)
- **Metrics**: *Bad* tables can be discarded based on metrics like accuracy and whitespace, without ever having to manually look at each table.
- Each table is a **pandas DataFrame**, which enables seamless integration into `ETL and data analysis workflows`_.
- **Export** to multiple formats, including json, excel and html.
- Simple and Elegant API, written in **Python**!
See `comparison with other PDF table extraction libraries and tools`_.
.. _ETL and data analysis workflows: https://gist.github.com/vinayak-mehta/e5949f7c2410a0e12f25d3682dc9e873
See `comparison with similar libraries and tools`_.
.. _comparison with similar libraries and tools: https://github.com/camelot-dev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools
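The metrics mentioned above come from each table's ``parsing_report``. As a rough illustration of metric-based filtering (the report dicts below are hand-written examples that mimic the report's shape, not real Camelot output), discarding low-quality tables might look like this:

```python
# Filter tables using the accuracy and whitespace metrics exposed by
# ``parsing_report``. The report dicts below are hand-written examples
# that mimic the report's shape; they are not real Camelot output.

def keep_table(report, min_accuracy=90.0, max_whitespace=20.0):
    """Return True if a parsing report passes both quality thresholds."""
    return (report["accuracy"] >= min_accuracy
            and report["whitespace"] <= max_whitespace)

reports = [
    {"accuracy": 99.02, "whitespace": 12.24, "order": 1, "page": 1},
    {"accuracy": 42.10, "whitespace": 55.00, "order": 2, "page": 1},
]
good = [r for r in reports if keep_table(r)]
print(len(good))  # only the first report passes
```

The thresholds here (90% accuracy, 20% whitespace) are arbitrary; pick values that suit your documents.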
Support the development
-----------------------
If Camelot has helped you, please consider supporting its development with a one-time or monthly donation `on OpenCollective`_!
.. _on OpenCollective: https://opencollective.com/camelot
.. _comparison with other PDF table extraction libraries and tools: https://github.com/socialcopsdev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools
The User Guide
--------------
This part of the documentation begins with some background information about why Camelot was created, takes you through some implementation details, and then focuses on step-by-step instructions for getting the most out of Camelot.
This part of the documentation, begins with some background information about why Camelot was created, takes a small dip into the implementation details and then focuses on step-by-step instructions for getting the most out of Camelot.
.. toctree::
:maxdepth: 2
user/intro
user/install-deps
user/install
user/how-it-works
user/quickstart
user/advanced
user/faq
user/cli
The API Documentation/Guide
---------------------------
The API Documentation / Guide
-----------------------------
If you are looking for information on a specific function, class, or method, this part of the documentation is for you.
If you are looking for information on a specific function, class, or method,
this part of the documentation is for you.
.. toctree::
:maxdepth: 2
@@ -128,9 +97,10 @@ If you are looking for information on a specific function, class, or method, thi
The Contributor Guide
---------------------
If you want to contribute to the project, this part of the documentation is for you.
If you want to contribute to the project, this part of the documentation is for
you.
.. toctree::
:maxdepth: 2
dev/contributing
dev/contributing

View File

@@ -8,7 +8,7 @@ This page covers some of the more advanced configurations for :ref:`Lattice <lat
Process background lines
------------------------
To detect line segments, :ref:`Lattice <lattice>` needs the lines that make the table to be in the foreground. Here's an example of a table with lines in the background:
To detect line segments, :ref:`Lattice <lattice>` needs the lines that make the table, to be in foreground. Here's an example of a table with lines in background.
.. figure:: ../_static/png/background_lines.png
:scale: 50%
@@ -24,34 +24,25 @@ To process background lines, you can pass ``process_background=True``.
>>> tables = camelot.read_pdf('background_lines.pdf', process_background=True)
>>> tables[1].df
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot lattice -back background_lines.pdf
.. csv-table::
:file: ../_static/csv/background_lines.csv
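The idea, roughly, is that the page image is inverted so that light lines on a dark fill become dark-on-light before line detection runs. A toy illustration of that inversion on grayscale intensities (Camelot's real pipeline works on the page image with OpenCV thresholding; this is only a sketch of the inversion step):

```python
# Invert grayscale intensities (0-255) so that light lines on a dark fill
# become dark-on-light, which line detection expects. This is only a toy
# sketch of the inversion step on a single pixel row.

def invert(row):
    return [255 - p for p in row]

row = [30, 30, 240, 240, 30]  # a light line segment inside a dark fill
print(invert(row))            # -> [225, 225, 15, 15, 225]
```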
Visual debugging
----------------
Plot geometry
-------------
.. note:: Visual debugging using ``plot()`` requires `matplotlib <https://matplotlib.org/>`_ which is an optional dependency. You can install it using ``$ pip install camelot-py[plot]``.
You can use a :class:`table <camelot.core.Table>` object's :meth:`plot() <camelot.core.TableList.plot>` method to plot various geometries that were detected by Camelot while processing the PDF page. This can help you select table areas, column separators and debug bad table outputs, by tweaking different configuration parameters.
You can use the :class:`plot() <camelot.plotting.PlotMethods>` method to generate a `matplotlib <https://matplotlib.org/>`_ plot of various elements that were detected on the PDF page while processing it. This can help you select table areas, column separators and debug bad table outputs, by tweaking different configuration parameters.
You can specify the type of element you want to plot using the ``kind`` keyword argument. The generated plot can be saved to a file by passing a ``filename`` keyword argument. The following plot types are supported:
The following geometries are available for plotting. You can pass them to the :meth:`plot() <camelot.core.TableList.plot>` method, which will then generate a `matplotlib <https://matplotlib.org/>`_ plot for the passed geometry.
- 'text'
- 'grid'
- 'table'
- 'contour'
- 'line'
- 'joint'
- 'textedge'
.. note:: 'line' and 'joint' can only be used with :ref:`Lattice <lattice>` and 'textedge' can only be used with :ref:`Stream <stream>`.
.. note:: The last three geometries can only be used with :ref:`Lattice <lattice>`, i.e. when ``flavor='lattice'``.
Let's generate a plot for each type using this `PDF <../_static/pdf/foo.pdf>`__ as an example. First, let's get all the tables out.
Let's generate a plot for each geometry using this `PDF <../_static/pdf/foo.pdf>`__ as an example. First, let's get all the tables out.
::
@@ -59,6 +50,8 @@ Let's generate a plot for each type using this `PDF <../_static/pdf/foo.pdf>`__
>>> tables
<TableList n=1>
.. _geometry_text:
text
^^^^
@@ -66,41 +59,31 @@ Let's plot all the text present on the table's PDF page.
::
>>> camelot.plot(tables[0], kind='text').show()
>>> tables[0].plot('text')
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot lattice -plot text foo.pdf
.. figure:: ../_static/png/plot_text.png
.. figure:: ../_static/png/geometry_text.png
:height: 674
:width: 1366
:scale: 50%
:alt: A plot of all text on a PDF page
:align: left
This, as we shall later see, is very helpful with :ref:`Stream <stream>` for noting table areas and column separators, in case Stream does not guess them correctly.
This, as we shall later see, is very helpful with :ref:`Stream <stream>`, for noting table areas and column separators, in case Stream does not guess them correctly.
.. note:: The *x-y* coordinates shown above change as you move your mouse cursor on the image, which can help you note coordinates.
.. note:: The *x-y* coordinates shown aboe change as you move your mouse cursor on the image, which can help you note coordinates.
.. _geometry_table:
table
^^^^^
Let's plot the table (to see if it was detected correctly or not). This plot type, along with contour, line and joint is useful for debugging and improving the extraction output, in case the table wasn't detected correctly. (More on that later.)
Let's plot the table (to see if it was detected correctly or not). This geometry type, along with contour, line and joint is useful for debugging and improving the extraction output, in case the table wasn't detected correctly. More on that later.
::
>>> camelot.plot(tables[0], kind='grid').show()
>>> tables[0].plot('table')
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot lattice -plot grid foo.pdf
.. figure:: ../_static/png/plot_table.png
.. figure:: ../_static/png/geometry_table.png
:height: 674
:width: 1366
:scale: 50%
@@ -109,6 +92,8 @@ Let's plot the table (to see if it was detected correctly or not). This plot typ
The table is perfect!
.. _geometry_contour:
contour
^^^^^^^
@@ -116,21 +101,17 @@ Now, let's plot all table boundaries present on the table's PDF page.
::
>>> camelot.plot(tables[0], kind='contour').show()
>>> tables[0].plot('contour')
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot lattice -plot contour foo.pdf
.. figure:: ../_static/png/plot_contour.png
.. figure:: ../_static/png/geometry_contour.png
:height: 674
:width: 1366
:scale: 50%
:alt: A plot of all contours on a PDF page
:align: left
.. _geometry_line:
line
^^^^
@@ -138,21 +119,17 @@ Cool, let's plot all line segments present on the table's PDF page.
::
>>> camelot.plot(tables[0], kind='line').show()
>>> tables[0].plot('line')
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot lattice -plot line foo.pdf
.. figure:: ../_static/png/plot_line.png
.. figure:: ../_static/png/geometry_line.png
:height: 674
:width: 1366
:scale: 50%
:alt: A plot of all lines on a PDF page
:align: left
.. _geometry_joint:
joint
^^^^^
@@ -160,111 +137,50 @@ Finally, let's plot all line intersections present on the table's PDF page.
::
>>> camelot.plot(tables[0], kind='joint').show()
>>> tables[0].plot('joint')
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot lattice -plot joint foo.pdf
.. figure:: ../_static/png/plot_joint.png
.. figure:: ../_static/png/geometry_joint.png
:height: 674
:width: 1366
:scale: 50%
:alt: A plot of all line intersections on a PDF page
:align: left
textedge
^^^^^^^^
You can also visualize the textedges found on a page by specifying ``kind='textedge'``. To know more about what a "textedge" is, you can see pages 20, 35 and 40 of `Anssi Nurminen's master's thesis <http://dspace.cc.tut.fi/dpub/bitstream/handle/123456789/21520/Nurminen.pdf?sequence=3>`_.
::
>>> camelot.plot(tables[0], kind='textedge').show()
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot stream -plot textedge foo.pdf
.. figure:: ../_static/png/plot_textedge.png
:height: 674
:width: 1366
:scale: 50%
:alt: A plot of relevant textedges on a PDF page
:align: left
Specify table areas
-------------------
In cases such as `these <../_static/pdf/table_areas.pdf>`__, it can be useful to specify exact table boundaries. You can plot the text on this page and note the top left and bottom right coordinates of the table.
Since :ref:`Stream <stream>` treats the whole page as a table, `for now`_, it's useful to specify table boundaries in cases such as `these <../_static/pdf/table_areas.pdf>`__. You can :ref:`plot the text <geometry_text>` on this page and note the left-top and right-bottom coordinates of the table.
Table areas that you want Camelot to analyze can be passed as a list of comma-separated strings to :meth:`read_pdf() <camelot.read_pdf>`, using the ``table_areas`` keyword argument.
.. _for now: https://github.com/socialcopsdev/camelot/issues/102
::
>>> tables = camelot.read_pdf('table_areas.pdf', flavor='stream', table_areas=['316,499,566,337'])
>>> tables[0].df
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot stream -T 316,499,566,337 table_areas.pdf
.. csv-table::
:file: ../_static/csv/table_areas.csv
.. note:: ``table_areas`` accepts strings of the form x1,y1,x2,y2 where (x1, y1) -> top-left and (x2, y2) -> bottom-right in PDF coordinate space. In PDF coordinate space, the bottom-left corner of the page is the origin, with coordinates (0, 0).
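Since most image viewers measure *y* downwards from the top of the page while PDF space measures it upwards from the bottom, a small helper can convert between the two. This is an illustrative sketch; the page height (842 pt, i.e. A4, in this example) is an assumed input, not something Camelot provides:

```python
# Convert a table area measured from the top of the page (image-viewer
# style, y growing downwards) into the bottom-left-origin PDF coordinate
# space that ``table_areas`` expects. The page height is an assumed input.

def to_pdf_area(x1, y_top, x2, y_bottom, page_height):
    """Return an 'x1,y1,x2,y2' string with y measured from the page bottom."""
    return f"{x1},{page_height - y_top},{x2},{page_height - y_bottom}"

# A table whose top edge sits 343 pt and bottom edge 505 pt from the top
# of an 842 pt tall page.
print(to_pdf_area(316, 343, 566, 505, 842))  # -> 316,499,566,337
```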
Specify table regions
---------------------
However, there may be cases like `[1] <../_static/pdf/table_regions.pdf>`__ and `[2] <https://github.com/camelot-dev/camelot/blob/master/tests/files/tableception.pdf>`__, where the table might not lie at the exact coordinates every time but in an approximate region.
You can use the ``table_regions`` keyword argument to :meth:`read_pdf() <camelot.read_pdf>` to solve for such cases. When ``table_regions`` is specified, Camelot will only analyze the specified regions to look for tables.
::
>>> tables = camelot.read_pdf('table_regions.pdf', table_regions=['170,370,560,270'])
>>> tables[0].df
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot lattice -R 170,370,560,270 table_regions.pdf
.. csv-table::
:file: ../_static/csv/table_regions.csv
Specify column separators
-------------------------
In cases like `these <../_static/pdf/column_separators.pdf>`__, where the text is very close to each other, it is possible that Camelot may guess the column separators' coordinates incorrectly. To correct this, you can explicitly specify the *x* coordinate for each column separator by plotting the text on the page.
In cases like `these <../_static/pdf/column_separators.pdf>`__, where the text is very close to each other, it is possible that Camelot may guess the column separators' coordinates incorrectly. To correct this, you can explicitly specify the *x* coordinate for each column separator by :ref:`plotting the text <geometry_text>` on the page.
You can pass the column separators as a list of comma-separated strings to :meth:`read_pdf() <camelot.read_pdf>`, using the ``columns`` keyword argument.
If you pass a single column separators' string in the list and no table areas, the separators will be applied to the whole page. When a list of table areas is specified and you need to specify column separators as well, **the length of both lists should be equal**. Each table area will be mapped to each column separators' string using their indices.
In case you passed a single column separators string list, and no table area is specified, the separators will be applied to the whole page. When a list of table areas is specified and there is a need to specify column separators as well, **the length of both lists should be equal**. Each table area will be mapped to each column separators' string using their indices.
For example, if you have specified two table areas, ``table_areas=['12,54,43,23', '20,67,55,33']``, and only want to specify column separators for the first table, you can pass an empty string for the second table in the column separators' list like this, ``columns=['10,120,200,400', '']``.
For example, if you have specified two table areas, ``table_areas=['12,23,43,54', '20,33,55,67']``, and only want to specify column separators for the first table, you can pass an empty string for the second table in the column separators' list, like this, ``columns=['10,120,200,400', '']``.
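The index-wise mapping described above can be sketched as follows (``map_columns`` is a hypothetical helper written for illustration, not part of Camelot's API):

```python
# Pair each table area with its column-separators string by index; an
# empty string means "let Camelot guess the separators for that area".
# ``map_columns`` is a hypothetical helper illustrating the rule above,
# not part of Camelot's API.

def map_columns(table_areas, columns):
    if len(table_areas) != len(columns):
        raise ValueError("table_areas and columns must have equal length")
    return {area: (cols or None) for area, cols in zip(table_areas, columns)}

mapping = map_columns(['12,54,43,23', '20,67,55,33'], ['10,120,200,400', ''])
print(mapping)  # second area maps to None: separators are auto-detected
```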
Let's get back to the *x* coordinates we got from plotting the text that exists on this `PDF <../_static/pdf/column_separators.pdf>`__, and get the table out!
Let's get back to the *x* coordinates we got from :ref:`plotting text <geometry_text>` that exists on this `PDF <../_static/pdf/column_separators.pdf>`__, and get the table out!
::
>>> tables = camelot.read_pdf('column_separators.pdf', flavor='stream', columns=['72,95,209,327,442,529,566,606,683'])
>>> tables[0].df
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot stream -C 72,95,209,327,442,529,566,606,683 column_separators.pdf
.. csv-table::
"...","...","...","...","...","...","...","...","...","..."
@@ -272,24 +188,18 @@ Let's get back to the *x* coordinates we got from plotting the text that exists
"NUMBER TYPE DBA NAME","","","LICENSEE NAME","ADDRESS","CITY","ST","ZIP","PHONE NUMBER","EXPIRES"
"...","...","...","...","...","...","...","...","...","..."
Ah! Since `PDFMiner <https://euske.github.io/pdfminer/>`_ merged the strings, "NUMBER", "TYPE" and "DBA NAME", all of them were assigned to the same cell. Let's see how we can fix this in the next section.
Ah! Since `PDFMiner <https://euske.github.io/pdfminer/>`_ merged the strings, "NUMBER", "TYPE" and "DBA NAME"; all of them were assigned to the same cell. Let's see how we can fix this in the next section.
Split text along separators
---------------------------
To deal with cases like the output from the previous section, you can pass ``split_text=True`` to :meth:`read_pdf() <camelot.read_pdf>`, which will split any strings that lie in different cells but have been assigned to a single cell (as a result of being merged together by `PDFMiner <https://euske.github.io/pdfminer/>`_).
To deal with cases like the output from the previous section, you can pass ``split_text=True`` to :meth:`read_pdf() <camelot.read_pdf>`, which will split any strings that lie in different cells but have been assigned to the a single cell (as a result of being merged together by `PDFMiner <https://euske.github.io/pdfminer/>`_).
::
>>> tables = camelot.read_pdf('column_separators.pdf', flavor='stream', columns=['72,95,209,327,442,529,566,606,683'], split_text=True)
>>> tables[0].df
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot -split stream -C 72,95,209,327,442,529,566,606,683 column_separators.pdf
.. csv-table::
"...","...","...","...","...","...","...","...","...","..."
@@ -300,29 +210,23 @@ To deal with cases like the output from the previous section, you can pass ``spl
Flag superscripts and subscripts
--------------------------------
There might be cases where you want to differentiate between the text and superscripts or subscripts, like this `PDF <../_static/pdf/superscript.pdf>`_.
There might be cases where you want to differentiate between the text, and superscripts or subscripts, like this `PDF <../_static/pdf/superscript.pdf>`_.
.. figure:: ../_static/png/superscript.png
:alt: A PDF with superscripts
:align: left
In this case, the text that `other tools`_ return, will be ``24.912``. This is relatively harmless when that decimal point is involved. But when it isn't there, you'll be left wondering why the results of your data analysis are 10x bigger!
You can solve this by passing ``flag_size=True``, which will enclose the superscripts and subscripts with ``<s></s>``, based on font size, as shown below.
.. _other tools: https://github.com/camelot-dev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools
::
>>> tables = camelot.read_pdf('superscript.pdf', flavor='stream', flag_size=True)
>>> tables[0].df
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot -flag stream superscript.pdf
.. csv-table::
"...","...","...","...","...","...","...","...","...","...","..."
"Madhya Pradesh","27.13","23.57","-","-","3.56","0.38","-","1.86","-","1.28"
"...","...","...","...","...","...","...","...","...","...","..."
Strip characters from text
--------------------------
You can strip unwanted characters like spaces, dots and newlines from a string using the ``strip_text`` keyword argument. Take a look at `this PDF <https://github.com/camelot-dev/camelot/blob/master/tests/files/tabula/12s0324.pdf>`_ as an example, the text at the start of each row contains a lot of unwanted spaces, dots and newlines.
::
>>> tables = camelot.read_pdf('12s0324.pdf', flavor='stream', strip_text=' .\n')
>>> tables[0].df
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot -strip ' .\n' stream 12s0324.pdf
.. csv-table::
"...","...","...","...","...","...","...","...","...","..."
"Forcible rape","17.5","2.6","14.9","17.2","2.5","14.7","","",""
"Robbery","102.1","25.5","76.6","90.0","22.9","67.1","12.1","2.5","9.5"
"Aggravated assault","338.4","40.1","298.3","264.0","30.2","233.8","74.4","9.9","64.5"
"Property crime","1,396 .4","338 .7","1,057 .7","875 .9","210 .8","665 .1","608 .2","127 .9","392 .6"
"Burglary","240.9","60.3","180.6","205.0","53.4","151.7","35.9","6.9","29.0"
"...","...","...","...","...","...","...","...","...","..."
Improve guessed table areas
---------------------------
While using :ref:`Stream <stream>`, automatic table detection can fail for PDFs like `this one <https://github.com/camelot-dev/camelot/blob/master/tests/files/edge_tol.pdf>`_. That's because the text is relatively far apart vertically, which can lead to shorter textedges being calculated.
.. note:: To know more about how textedges are calculated to guess table areas, you can see pages 20, 35 and 40 of `Anssi Nurminen's master's thesis <http://dspace.cc.tut.fi/dpub/bitstream/handle/123456789/21520/Nurminen.pdf?sequence=3>`_.
Let's see the table area that is detected by default.
::
>>> tables = camelot.read_pdf('edge_tol.pdf', flavor='stream')
>>> camelot.plot(tables[0], kind='contour').show()
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot stream -plot contour edge_tol.pdf
.. figure:: ../_static/png/edge_tol_1.png
:height: 674
:width: 1366
:scale: 50%
:alt: Table area with default edge_tol
:align: left
To improve the detected area, you can increase the ``edge_tol`` (default: 50) value to counter the effect of text being placed relatively far apart vertically. A larger ``edge_tol`` will lead to longer textedges being detected, improving the guessed table area. Let's use a value of 500.
::
>>> tables = camelot.read_pdf('edge_tol.pdf', flavor='stream', edge_tol=500)
>>> camelot.plot(tables[0], kind='contour').show()
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot stream -e 500 -plot contour edge_tol.pdf
.. figure:: ../_static/png/edge_tol_2.png
:height: 674
:width: 1366
:scale: 50%
:alt: Table area with edge_tol=500
:align: left
As you can see, the guessed table area has improved!
Improve guessed table rows
--------------------------
You can pass ``row_tol=<+int>`` to group the rows closer together, as shown below.
::
>>> tables = camelot.read_pdf('group_rows.pdf', flavor='stream', row_tol=10)
>>> tables[0].df
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot stream -r 10 group_rows.pdf
.. csv-table::
"Clave","Nombre Entidad","Clave","","Nombre Municipio","Clave","Nombre Localidad"
Detect short lines
------------------
There might be cases while using :ref:`Lattice <lattice>` when smaller lines don't get detected. The size of the smallest line that gets detected is calculated by dividing the PDF page's dimensions with a scaling factor called ``line_scale``. By default, its value is 15.
As you can guess, the larger the ``line_scale``, the smaller the size of lines getting detected.
.. warning:: Making ``line_scale`` very large (>150) will lead to text getting detected as lines.
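To build intuition for this knob, here is a back-of-the-envelope sketch (the helper below is illustrative; the exact kernel computation is internal to Camelot):

```python
def min_line_length(page_dim_px, line_scale=15):
    """Approximate the smallest detectable line, in pixels, along one page dimension."""
    return page_dim_px // line_scale

# For a page image roughly 1684 px wide:
print(min_line_length(1684))      # line_scale=15 (default) -> 112 px
print(min_line_length(1684, 40))  # line_scale=40 -> 42 px, so shorter rules survive
```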
Here's a `PDF <../_static/pdf/short_lines.pdf>`__ where small lines separating the headers don't get detected with the default value of 15.
:alt: A PDF table with short lines
:align: left
Let's plot the table for this PDF.
::
>>> tables = camelot.read_pdf('short_lines.pdf')
>>> camelot.plot(tables[0], kind='grid').show()
.. figure:: ../_static/png/short_lines_1.png
:alt: A plot of the PDF table with short lines
:align: left
Clearly, the smaller lines separating the headers couldn't be detected. Let's try with ``line_scale=40``, and plot the table again.
::
>>> tables = camelot.read_pdf('short_lines.pdf', line_scale=40)
>>> camelot.plot(tables[0], kind='grid').show()
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot lattice -scale 40 -plot grid short_lines.pdf
.. figure:: ../_static/png/short_lines_2.png
:alt: An improved plot of the PDF table with short lines
Shift text in spanning cells
----------------------------
By default, the :ref:`Lattice <lattice>` method shifts text in spanning cells, first to the left and then to the top, as you can observe in the output table above. However, this behavior can be changed using the ``shift_text`` keyword argument. Think of it as setting the *gravity* for a table: it decides the direction in which the text will move and finally come to rest.
``shift_text`` expects a list with one or more characters from the following set: ``('', 'l', 'r', 't', 'b')``, which are then applied *in order*. The default, as we discussed above, is ``['l', 't']``.
::
>>> tables = camelot.read_pdf('short_lines.pdf', line_scale=40, shift_text=[''])
>>> tables[0].df
.. csv-table::
"Knowledge &Practices on HTN &","2400","Men (≥ 18 yrs)","-","-","-","1728"
"DM","2400","Women (≥ 18 yrs)","-","-","-","1728"
No surprises there: it did remain in place (observe the strings "2400" and "All the available individuals"). Let's pass ``shift_text=['r', 'b']`` to set the *gravity* to right-bottom and move the text in that direction.
::
>>> tables = camelot.read_pdf('short_lines.pdf', line_scale=40, shift_text=['r', 'b'])
>>> tables[0].df
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot lattice -scale 40 -shift r -shift b short_lines.pdf
.. csv-table::
"Investigations","No. ofHHs","Age/Sex/Physiological Group","Preva-lence","C.I*","RelativePrecision","Sample sizeper State"
Copy text in spanning cells
---------------------------
You can copy text in spanning cells when using :ref:`Lattice <lattice>`, in either the horizontal or vertical direction, or both. This behavior is disabled by default.
``copy_text`` expects a list with one or more characters from the following set: ``('v', 'h')``, which are then applied *in order*.
>>> tables = camelot.read_pdf('copy_text.pdf', copy_text=['v'])
>>> tables[0].df
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot lattice -copy v copy_text.pdf
.. csv-table::
"Sl. No.","Name of State/UT","Name of District","Disease/ Illness","No. of Cases","No. of Deaths","Date of start of outbreak","Date of reporting","Current Status","..."
"3","Odisha","Kalahandi","iii. Food Poisoning","42","0","02/01/14","03/01/14","Under control","..."
"4","West Bengal","West Medinipur","iv. Acute Diarrhoeal Disease","145","0","04/01/14","05/01/14","Under control","..."
"4","West Bengal","Birbhum","v. Food Poisoning","199","0","31/12/13","31/12/13","Under control","..."
"4","West Bengal","Howrah","vi. Viral Hepatitis A &E","85","0","26/12/13","27/12/13","Under surveillance","..."
Tweak layout generation
-----------------------
Camelot is built on top of PDFMiner's functionality of grouping characters on a page into words and sentences. In some cases (such as `#170 <https://github.com/camelot-dev/camelot/issues/170>`_ and `#215 <https://github.com/camelot-dev/camelot/issues/215>`_), PDFMiner can group characters that should belong to the same sentence into separate sentences.
To deal with such cases, you can tweak PDFMiner's `LAParams kwargs <https://github.com/euske/pdfminer/blob/master/pdfminer/layout.py#L33>`_ to improve layout generation, by passing the keyword arguments as a dict using ``layout_kwargs`` in :meth:`read_pdf() <camelot.read_pdf>`. To know more about the parameters you can tweak, you can check out `PDFMiner docs <https://pdfminersix.rtfd.io/en/latest/reference/composable.html>`_.
::
>>> tables = camelot.read_pdf('foo.pdf', layout_kwargs={'detect_vertical': False})
.. _image-conversion-backend:
Use alternate image conversion backends
---------------------------------------
When using the :ref:`Lattice <lattice>` flavor, Camelot uses ``ghostscript`` to convert PDF pages to images for line recognition. If you face installation issues with ``ghostscript``, you can use an alternate image conversion backend called ``poppler``. You can specify which image conversion backend you want to use with::
>>> tables = camelot.read_pdf(filename, backend="ghostscript") # default
>>> tables = camelot.read_pdf(filename, backend="poppler")
.. note:: ``ghostscript`` will be replaced by ``poppler`` as the default image conversion backend in ``v0.12.0``.
If you face issues with both ``ghostscript`` and ``poppler``, you can supply your own image conversion backend::
>>> class ConversionBackend(object):
>>> def convert(pdf_path, png_path):
>>> # read pdf page from pdf_path
>>> # convert pdf page to image
>>> # write image to png_path
>>> pass
>>>
>>> tables = camelot.read_pdf(filename, backend=ConversionBackend())
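For instance, a hypothetical backend that shells out to poppler's ``pdftoppm`` tool (this assumes ``pdftoppm`` is on your ``PATH``; the class name and the 300 dpi choice are illustrative, not part of Camelot's API):

```python
import shutil
import subprocess

class PdftoppmBackend(object):
    def convert(self, pdf_path, png_path):
        """Render the first page of pdf_path to png_path via poppler's pdftoppm."""
        if shutil.which("pdftoppm") is None:
            raise OSError("pdftoppm not found; install poppler-utils")
        # pdftoppm appends ".png" itself, so pass the output path without it
        prefix = png_path[:-4] if png_path.endswith(".png") else png_path
        subprocess.run(
            ["pdftoppm", "-png", "-r", "300", "-singlefile", pdf_path, prefix],
            check=True,
        )
```

It could then be passed as ``backend=PdftoppmBackend()`` in the call above.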
"4","West Bengal","Howrah","vi. Viral Hepatitis A &E","85","0","26/12/13","27/12/13","Under surveillance","..."

View File

@ -1,24 +1,22 @@
.. _cli:
Command-Line Interface
======================
Camelot comes with a command-line interface.
You can print the help for the interface by typing ``camelot --help`` in your favorite terminal program, as shown below. Furthermore, you can print the help for each command by typing ``camelot <command> --help``. Try it out!
::
Usage: camelot [OPTIONS] COMMAND [ARGS]...
Camelot: PDF Table Extraction for Humans
Options:
--version Show the version and exit.
-q, --quiet TEXT Suppress logs and warnings.
-p, --pages TEXT Comma-separated page numbers. Example: 1,3,4
or 1,4-end.
-pw, --password TEXT Password for decryption.
-o, --output TEXT Output file path.
-f, --format [csv|json|excel|html]
Output file format.
-split, --split_text Split text that spans across multiple cells.
-flag, --flag_size Flag text based on font size. Useful to
detect super/subscripts.
-strip, --strip_text Characters that should be stripped from a
string before assigning it to a cell.
-M, --margins <FLOAT FLOAT FLOAT>...
PDFMiner char_margin, line_margin and
word_margin.
Commands:
lattice Use lines between text to parse the table.
stream Use spaces between text to parse the table.
.. _faq:
Frequently Asked Questions
==========================
This part of the documentation answers some common questions. To add questions, please open an issue `here <https://github.com/camelot-dev/camelot/issues/new>`_.
Does Camelot work with image-based PDFs?
----------------------------------------
**No**, Camelot only works with text-based PDFs and not scanned documents. (As Tabula `explains <https://github.com/tabulapdf/tabula#why-tabula>`_, "If you can click and drag to select text in your table in a PDF viewer, then your PDF is text-based".)
How to reduce memory usage for long PDFs?
-----------------------------------------
During table extraction from long PDF documents, RAM usage can grow significantly.
A simple workaround is to divide the extraction into chunks, and save extracted data to disk at the end of every chunk.
For more details, check out this code snippet from `@anakin87 <https://github.com/anakin87>`_:
::
    import camelot

    def chunks(l, n):
        """Yield successive n-sized chunks from l."""
        for i in range(0, len(l), n):
            yield l[i : i + n]

    def extract_tables(filepath, pages, chunk_size=50, export_path=".", params=None):
        """
        Divide the extraction work into chunks of ``chunk_size`` pages.
        At the end of every chunk, save extracted data to disk and free RAM.

        filepath : str
            Filepath or URL of the PDF file.
        pages : str, optional (default: '1')
            Comma-separated page numbers.
            Example: '1,3,4' or '1,4-end' or 'all'.
        """
        if params is None:
            params = {}

        # get list of pages from camelot.handlers.PDFHandler
        handler = camelot.handlers.PDFHandler(filepath)
        page_list = handler._get_pages(filepath, pages=pages)

        # chunk the page list
        page_chunks = list(chunks(page_list, chunk_size))

        # extraction and export
        for chunk in page_chunks:
            pages_string = str(chunk).replace("[", "").replace("]", "")
            tables = camelot.read_pdf(filepath, pages=pages_string, **params)
            tables.export(f"{export_path}/tables.csv")
How can I supply my own image conversion backend to Lattice?
------------------------------------------------------------
When using the :ref:`Lattice <lattice>` flavor, you can supply your own :ref:`image conversion backend <image-conversion-backend>` by creating a class with a ``convert`` method as follows::
>>> class ConversionBackend(object):
>>> def convert(pdf_path, png_path):
>>> # read pdf page from pdf_path
>>> # convert pdf page to image
>>> # write image to png_path
>>> pass
>>>
>>> tables = camelot.read_pdf(filename, backend=ConversionBackend())
How It Works
============
This part of the documentation includes a high-level explanation of how Camelot extracts tables from PDF files.
You can choose between two table parsing methods, *Stream* and *Lattice*. These names for parsing methods inside Camelot were inspired from `Tabula <https://github.com/tabulapdf/tabula>`_.
.. _stream:
Stream
------
Stream can be used to parse tables that have whitespaces between cells to simulate a table structure. It is built on top of PDFMiner's functionality of grouping characters on a page into words and sentences, using `margins <https://euske.github.io/pdfminer/#tools>`_.
1. Words on the PDF page are grouped into text rows based on their *y* axis overlaps.
2. Textedges are calculated and then used to guess interesting table areas on the PDF page. You can read `Anssi Nurminen's master's thesis <https://pdfs.semanticscholar.org/a9b1/67a86fb189bfcd366c3839f33f0404db9c10.pdf>`_ to know more about this table detection technique. [See pages 20, 35 and 40]
3. The number of columns inside each table area are then guessed. This is done by calculating the mode of number of words in each text row. Based on this mode, words in each text row are chosen to calculate a list of column *x* ranges.
4. Words that lie inside/outside the current column *x* ranges are then used to extend the current list of columns.
5. Finally, a table is formed using the text rows' *y* ranges and column *x* ranges and words found on the page are assigned to the table's cells based on their *x* and *y* coordinates.
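Steps 1 and 3 above can be sketched in a few lines of Python. This is an illustrative toy, not Camelot's actual implementation; it assumes words are given as ``(x, y, text)`` tuples:

```python
from collections import Counter

def group_rows(words, tol=2):
    """Step 1: group words into text rows by y-coordinate proximity."""
    rows = []
    for x, y, text in sorted(words, key=lambda w: -w[1]):  # top of page first
        if rows and abs(rows[-1][0][1] - y) <= tol:
            rows[-1].append((x, y, text))
        else:
            rows.append([(x, y, text)])
    return [sorted(r) for r in rows]  # order each row left to right

def guess_ncols(rows):
    """Step 3: guess the column count as the mode of words per row."""
    return Counter(len(r) for r in rows).most_common(1)[0][0]
```

For example, five words spread over three rows with two words in most rows would yield a guess of two columns.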
.. _lattice:
Lattice
-------
Lattice is more deterministic in nature, and it does not rely on guesses. It can be used to parse tables that have demarcated lines between cells, and it can automatically parse multiple tables present on a page.
It starts by converting the PDF page to an image using ghostscript, and then processes it to get horizontal and vertical line segments by applying a set of morphological transformations (erosion and dilation) using OpenCV.
Let's see how Lattice processes the second page of `this PDF`_, step-by-step.
1. Line segments are detected.
.. image:: ../_static/png/plot_line.png
:height: 674
:width: 1366
:scale: 50%
@ -49,23 +49,23 @@ Let's see how Lattice processes the second page of `this PDF`_, step-by-step.
.. _and: https://en.wikipedia.org/wiki/Logical_conjunction
.. image:: ../_static/png/plot_joint.png
:height: 674
:width: 1366
:scale: 50%
:align: left
3. Table boundaries are computed by overlapping the detected line segments again, this time by "`or`_"ing their pixel intensities.
.. _or: https://en.wikipedia.org/wiki/Logical_disjunction
.. image:: ../_static/png/plot_contour.png
:height: 674
:width: 1366
:scale: 50%
:align: left
4. Since dimensions of the PDF page and its image vary, the detected table boundaries, line intersections, and line segments are scaled and translated to the PDF page's coordinate space, and a representation of the table is created.
.. image:: ../_static/png/table.png
:height: 674
5. Spanning cells are detected using the line segments and line intersections.
.. image:: ../_static/png/plot_table.png
:height: 674
:width: 1366
:scale: 50%
:align: left
6. Finally, the words found on the page are assigned to the table's cells based on their *x* and *y* coordinates.
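The morphological trick in step 1 can be illustrated with plain NumPy: an *opening* with a wide 1-by-k kernel keeps only horizontal runs of pixels at least ``k`` long, which preserves table rules and erases text. Camelot itself uses OpenCV's ``erode``/``dilate``; this toy version exists only for intuition:

```python
import numpy as np

def keep_horizontal_runs(img, k):
    """Zero out everything except horizontal runs of 1s at least k pixels long."""
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        run = 0
        for x in range(img.shape[1]):
            run = run + 1 if img[y, x] else 0
            if run >= k:
                # the whole qualifying run survives (erosion followed by dilation)
                out[y, x - k + 1 : x + 1] = 1
    return out

# Row 0 is a 4-px table rule; row 1 is "text" (isolated dark pixels):
img = np.array([[1, 1, 1, 1, 0],
                [0, 1, 0, 1, 0]])
```

Vertical lines are detected the same way with the kernel rotated 90 degrees.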
.. _install_deps:
Installation of dependencies
============================
The dependencies `Ghostscript <https://www.ghostscript.com>`_ and `Tkinter <https://wiki.python.org/moin/TkInter>`_ can be installed using your system's package manager or by running their installer.
OS-specific instructions
------------------------
Ubuntu
^^^^^^
::
$ apt install ghostscript python3-tk
MacOS
^^^^^
::
$ brew install ghostscript tcl-tk
Windows
^^^^^^^
For Ghostscript, you can get the installer at their `downloads page <https://www.ghostscript.com/download/gsdnld.html>`_. And for Tkinter, you can download the `ActiveTcl Community Edition <https://www.activestate.com/activetcl/downloads>`_ from ActiveState.
Checks to see if dependencies are installed correctly
-----------------------------------------------------
You can run the following checks to see if the dependencies were installed correctly.
For Ghostscript
^^^^^^^^^^^^^^^
Open the Python REPL and run the following:
For Ubuntu/MacOS::
>>> from ctypes.util import find_library
>>> find_library("gs")
"libgs.so.9"
For Windows::
>>> import ctypes
>>> from ctypes.util import find_library
>>> find_library("".join(("gsdll", str(ctypes.sizeof(ctypes.c_voidp) * 8), ".dll")))
<name-of-ghostscript-library-on-windows>
**Check:** The output of the ``find_library`` function should not be empty.
If the output is empty, then it's possible that the Ghostscript library is not available in one of the ``LD_LIBRARY_PATH``/``DYLD_LIBRARY_PATH``/``PATH`` variables, depending on your operating system. In this case, you may have to modify one of those path variables.
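For example, on Linux you might prepend the directory containing ``libgs`` to the loader search path (the ``/usr/local/lib`` path below is an assumption; use whichever directory Ghostscript installed its shared library into):

```shell
# Prepend the Ghostscript library directory to the loader search path
export LD_LIBRARY_PATH="/usr/local/lib:${LD_LIBRARY_PATH}"
```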
For Tkinter
^^^^^^^^^^^
Launch Python and then import Tkinter::
>>> import tkinter
**Check:** Importing ``tkinter`` should not raise an import error.
Installation of Camelot
=======================
This part of the documentation covers the steps to install Camelot.
After :ref:`installing the dependencies <install_deps>`, which include `Ghostscript <https://www.ghostscript.com>`_ and `Tkinter <https://wiki.python.org/moin/TkInter>`_, you can use one of the following methods to install Camelot:
.. warning:: The ``lattice`` flavor will fail to run if Ghostscript is not installed. You may run into errors as shown in `issue #193 <https://github.com/camelot-dev/camelot/issues/193>`_.
pip
---
To install Camelot from PyPI using ``pip``, please include the extra ``base`` requirement as shown::
$ pip install "camelot-py[base]"
conda
-----
`conda`_ is a package manager and environment management system for the `Anaconda <https://anaconda.org>`_ distribution. It can be used to install Camelot from the ``conda-forge`` channel::
$ conda install -c conda-forge camelot-py
From the source code
--------------------
After :ref:`installing the dependencies <install_deps>`, you can install Camelot from source by:
1. Cloning the GitHub repository.
::
$ git clone https://www.github.com/camelot-dev/camelot
2. And then simply using pip again.
::
$ cd camelot
$ pip install ".[base]"
The Camelot Project
-------------------
The PDF (Portable Document Format) was born out of `The Camelot Project`_ to create "a universal way to communicate documents across a wide variety of machine configurations, operating systems and communication networks". The goal was to make these documents viewable on any display and printable on any modern printer. The invention of the `PostScript`_ page description language, which enabled the creation of *fixed-layout* flat documents (with text, fonts, graphics, images encapsulated), solved this problem.
At a high level, PostScript defines instructions, such as "place this character at this *x,y* coordinate on a plane". Spaces can be *simulated* by placing characters relatively far apart. Extending from that, tables can be *simulated* by placing characters (which constitute words) in two-dimensional grids. A PDF viewer just takes these instructions and draws everything for the user to view. Since a PDF is just characters on a plane, there is no table data structure that can be extracted and used for analysis!
Sadly, a lot of today's open data is trapped in PDF tables.
.. _PostScript: http://www.planetpdf.com/planetpdf/pdfs/warnock_camelot.pdf
Why another PDF table extraction library?
-----------------------------------------
There are both open (`Tabula`_, `pdf-table-extract`_) and closed-source (`smallpdf`_, `PDFTables`_) tools that are widely used to extract tables from PDF files. They either give a nice output or fail miserably. There is no in between. This is not helpful since everything in the real world, including PDF table extraction, is fuzzy. This leads to the creation of ad-hoc table extraction scripts for each type of PDF table.
Camelot was created to offer users complete control over table extraction. If you can't get your desired output with the default settings, you can tweak them and get the job done!
Here is a `comparison`_ of Camelot's output with outputs from other open-source PDF parsing libraries and tools.
.. _pdf-table-extract: https://github.com/ashima/pdf-table-extract
.. _PDFTables: https://pdftables.com/
.. _Smallpdf: https://smallpdf.com
.. _comparison: https://github.com/camelot-dev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools
What's in a name?
-----------------
As you can already guess, this library is named after `The Camelot Project`_.
Fun fact: In the British comedy film `Monty Python and the Holy Grail`_ (and in the `Arthurian legend`_ depicted in the film), "Camelot" is the name of the castle where Arthur leads his men, the Knights of the Round Table, and then sets off elsewhere after deciding that it is "a silly place". Interestingly, the language in which this library is written (Python) was named after Monty Python.
As you can already guess, this library is named after `The Camelot Project`_. Fun fact, "Camelot" is the name of the castle in `Monty Python and the Holy Grail`_, where Arthur leads his men, the Knights of the Round Table, and then sets off elsewhere after deciding that it is "a silly place". Interestingly, the language in which this library is written (Python) was named after Monty Python.
.. _The Camelot Project: http://www.planetpdf.com/planetpdf/pdfs/warnock_camelot.pdf
.. _Monty Python and the Holy Grail: https://en.wikipedia.org/wiki/Monty_Python_and_the_Holy_Grail
.. _Arthurian legend: https://en.wikipedia.org/wiki/King_Arthur
Camelot License
---------------


@ -3,7 +3,7 @@
Quickstart
==========
In a hurry to extract tables from PDFs? This document gives a good introduction to help you get started with Camelot.
In a hurry to extract tables from PDFs? This document gives a good introduction to help you get started with using Camelot.
Read the PDF
------------
@ -14,7 +14,7 @@ Begin by importing the Camelot module::
>>> import camelot
Now, let's try to read a PDF. (You can check out the PDF used in this example `here`_.) Since the PDF has a table with clearly demarcated lines, we will use the :ref:`Lattice <lattice>` method here.
Now, let's try to read a PDF. You can check out the PDF used in this example, `here`_. Since the PDF has a table with clearly demarcated lines, we will use the :ref:`Lattice <lattice>` method here. To do that we will set the ``mesh`` keyword argument to ``True``.
.. note:: :ref:`Lattice <lattice>` is used by default. You can use :ref:`Stream <stream>` with ``flavor='stream'``.
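The flavor string simply selects one of the two parsers. A minimal sketch of that dispatch (hypothetical helper name, not camelot's actual internals; the error message mirrors the one ``read_pdf()`` raises for unknown flavors):

```python
def choose_parser(flavor="lattice"):
    """Map a flavor string to a parser name, as read_pdf() does conceptually."""
    parsers = {"lattice": "Lattice", "stream": "Stream"}
    if flavor not in parsers:
        raise ValueError("Unknown flavor specified. Use either 'lattice' or 'stream'")
    return parsers[flavor]
```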
@ -47,7 +47,7 @@ Let's print the parsing report.
'page': 1
}
Woah! The accuracy is top-notch and there is less whitespace, which means the table was most likely extracted correctly. You can access the table as a pandas DataFrame by using the :class:`table <camelot.core.Table>` object's ``df`` property.
Woah! The accuracy is top-notch and whitespace is less, that means the table was extracted correctly (most probably). You can access the table as a pandas DataFrame by using the :class:`table <camelot.core.Table>` object's ``df`` property.
::
@ -56,7 +56,7 @@ Woah! The accuracy is top-notch and there is less whitespace, which means the ta
.. csv-table::
:file: ../_static/csv/foo.csv
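The whitespace figure in the parsing report is, roughly, the share of empty cells in the extracted table. A rough sketch of that metric (hypothetical function, assuming the table is a list of string rows; camelot computes it internally on the parsed cells):

```python
def whitespace_percent(rows):
    """Percentage of cells that are empty strings, as in the parsing report."""
    cells = [cell for row in rows for cell in row]
    empty = sum(1 for cell in cells if cell.strip() == "")
    return round(100 * empty / len(cells), 2)
```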
Looks good! You can now export the table as a CSV file using its :meth:`to_csv() <camelot.core.Table.to_csv>` method. Alternatively you can use the :meth:`to_json() <camelot.core.Table.to_json>`, :meth:`to_excel() <camelot.core.Table.to_excel>`, :meth:`to_html() <camelot.core.Table.to_html>`, :meth:`to_markdown() <camelot.core.Table.to_markdown>` or :meth:`to_sqlite() <camelot.core.Table.to_sqlite>` methods to export the table as JSON, Excel, HTML or Markdown files, or a sqlite database, respectively.
Looks good! You can be export the table as a CSV file using its :meth:`to_csv() <camelot.core.Table.to_csv>` method. Alternatively you can use :meth:`to_json() <camelot.core.Table.to_json>`, :meth:`to_excel() <camelot.core.Table.to_excel>` or :meth:`to_html() <camelot.core.Table.to_html>` methods to export the table as JSON, Excel and HTML files respectively.
::
@ -70,13 +70,7 @@ You can also export all tables at once, using the :class:`tables <camelot.core.T
>>> tables.export('foo.csv', f='csv')
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot --format csv --output foo.csv lattice foo.pdf
This will export all tables as CSV files at the path specified. Alternatively, you can use ``f='json'``, ``f='excel'``, ``f='html'``, ``f='markdown'`` or ``f='sqlite'``.
This will export all tables as CSV files at the path specified. Alternatively, you can use ``f='json'``, ``f='excel'`` or ``f='html'``.
.. note:: The :meth:`export() <camelot.core.TableList.export>` method exports files with a ``page-*-table-*`` suffix. In the example above, the single table in the list will be exported to ``foo-page-1-table-1.csv``. If the list contains multiple tables, multiple CSV files will be created. To avoid filling up your path with multiple files, you can use ``compress=True``, which will create a single ZIP file at your path with all the CSV files.
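The suffix scheme described in the note can be sketched as a small filename helper (hypothetical helper, mirroring the naming behaviour of ``export()``):

```python
import os

def export_name(path, page, table):
    """Build a 'page-*-table-*' suffixed filename, e.g. foo-page-1-table-1.csv."""
    root, ext = os.path.splitext(path)
    return f"{root}-page-{page}-table-{table}{ext}"
```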
@ -91,42 +85,8 @@ By default, Camelot only uses the first page of the PDF to extract tables. To sp
>>> camelot.read_pdf('your.pdf', pages='1,2,3')
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
The ``pages`` keyword argument accepts pages as comma-separated string of page numbers. You can also specify page ranges, for example ``pages=1,4-10,20-30`` or ``pages=1,4-10,20-end``.
$ camelot --pages 1,2,3 lattice your.pdf
------------------------
The ``pages`` keyword argument accepts pages as comma-separated string of page numbers. You can also specify page ranges — for example, ``pages=1,4-10,20-30`` or ``pages=1,4-10,20-end``.
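Page strings like these can be expanded into a page list with logic along the following lines (a sketch only; camelot's ``PDFHandler`` does the real parsing, resolving ``end`` to the PDF's last page):

```python
def parse_pages(pages, last_page):
    """Expand '1,4-10,20-end' style strings into a sorted list of page numbers."""
    result = set()
    for token in pages.split(","):
        if "-" in token:
            start, end = token.split("-")
            end = last_page if end == "end" else int(end)
            result.update(range(int(start), end + 1))
        else:
            result.add(int(token))
    return sorted(result)
```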
Reading encrypted PDFs
----------------------
To extract tables from encrypted PDF files you must provide a password when calling :meth:`read_pdf() <camelot.read_pdf>`.
::
>>> tables = camelot.read_pdf('foo.pdf', password='userpass')
>>> tables
<TableList n=1>
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot --password userpass lattice foo.pdf
Currently Camelot only supports PDFs encrypted with ASCII passwords and algorithm `code 1 or 2`_. An exception is thrown if the PDF cannot be read. This may be due to no password being provided, an incorrect password, or an unsupported encryption algorithm.
Further encryption support may be added in future, however in the meantime if your PDF files are using unsupported encryption algorithms you are advised to remove encryption before calling :meth:`read_pdf() <camelot.read_pdf>`. This can be successfully achieved with third-party tools such as `QPDF`_.
::
$ qpdf --password=<PASSWORD> --decrypt input.pdf output.pdf
.. _code 1 or 2: https://github.com/mstamy2/PyPDF2/issues/378
.. _QPDF: https://www.github.com/qpdf/qpdf
----
Ready for more? Check out the :ref:`advanced <advanced>` section.


@ -0,0 +1,3 @@
pytest==3.8.0
pytest-runner==4.2
Sphinx==1.7.9

requirements.txt 100644

@ -0,0 +1,7 @@
click==6.7
matplotlib==2.2.3
numpy==1.13.3
opencv-python==3.4.2.17
pandas==0.23.4
pdfminer==20140328
PyPDF2==1.26.0


@ -2,5 +2,5 @@
test=pytest
[tool:pytest]
addopts = --verbose --cov-config .coveragerc --cov-report term --cov-report xml --cov=camelot --mpl
addopts = --verbose
python_files = tests/test_*.py

setup.py

@ -2,90 +2,61 @@
import os
from setuptools import find_packages
from pkg_resources import parse_version
here = os.path.abspath(os.path.dirname(__file__))
about = {}
with open(os.path.join(here, "camelot", "__version__.py"), "r") as f:
with open(os.path.join(here, 'camelot', '__version__.py'), 'r') as f:
exec(f.read(), about)
with open("README.md", "r") as f:
with open('README.md', 'r') as f:
readme = f.read()
requires = [
"chardet>=3.0.4",
"click>=6.7",
"numpy>=1.13.3",
"openpyxl>=2.5.8",
"pandas>=0.23.4",
"pdfminer.six>=20200726",
"PyPDF2>=1.26.0",
"tabulate>=0.8.9",
]
base_requires = ["ghostscript>=0.7", "opencv-python>=3.4.2.17", "pdftopng>=0.2.3"]
plot_requires = [
"matplotlib>=2.2.3",
]
dev_requires = [
"codecov>=2.0.15",
"pytest>=5.4.3",
"pytest-cov>=2.10.0",
"pytest-mpl>=0.11",
"pytest-runner>=5.2",
"Sphinx>=3.1.2",
"sphinx-autobuild>=2021.3.14",
]
all_requires = base_requires + plot_requires
dev_requires = dev_requires + all_requires
def setup_package():
metadata = dict(
name=about["__title__"],
version=about["__version__"],
description=about["__description__"],
long_description=readme,
long_description_content_type="text/markdown",
url=about["__url__"],
author=about["__author__"],
author_email=about["__author_email__"],
license=about["__license__"],
packages=find_packages(exclude=("tests",)),
install_requires=requires,
extras_require={
"all": all_requires,
"base": base_requires,
"cv": base_requires, # deprecate
"dev": dev_requires,
"plot": plot_requires,
},
entry_points={
"console_scripts": [
"camelot = camelot.cli:cli",
],
},
classifiers=[
# Trove classifiers
# Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
],
)
reqs = []
with open('requirements.txt', 'r') as f:
for line in f:
reqs.append(line.strip())
dev_reqs = []
with open('requirements-dev.txt', 'r') as f:
for line in f:
dev_reqs.append(line.strip())
metadata = dict(name=about['__title__'],
version=about['__version__'],
description=about['__description__'],
long_description=readme,
url=about['__url__'],
author=about['__author__'],
author_email=about['__author_email__'],
license=about['__license__'],
packages=find_packages(exclude=('tests',)),
install_requires=reqs,
extras_require={
'dev': dev_reqs
},
entry_points={
'console_scripts': [
'camelot = camelot.cli:cli',
],
},
classifiers=[
# Trove classifiers
# Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 2.7'
])
try:
from setuptools import setup
except ImportError:
except:
from distutils.core import setup
setup(**metadata)
if __name__ == "__main__":
setup_package()
if __name__ == '__main__':
setup_package()


@ -1,3 +0,0 @@
import matplotlib
matplotlib.use("agg")

File diff suppressed because it is too large

Binary file not shown.


@ -1,2 +0,0 @@
"a","b"
"1","2"

Binary file not shown.

@ -1,191 +0,0 @@
# -*- coding: utf-8 -*-
import os
import sys
import pytest
from click.testing import CliRunner
from camelot.cli import cli
from camelot.utils import TemporaryDirectory
testdir = os.path.dirname(os.path.abspath(__file__))
testdir = os.path.join(testdir, "files")
skip_on_windows = pytest.mark.skipif(
sys.platform.startswith("win"),
reason="Ghostscript not installed in Windows test environment",
)
def test_help_output():
runner = CliRunner()
prog_name = runner.get_default_prog_name(cli)
result = runner.invoke(cli, ["--help"])
output = result.output
assert prog_name == "camelot"
assert result.output.startswith("Usage: %(prog_name)s [OPTIONS] COMMAND" % locals())
assert all(
v in result.output
for v in ["Options:", "--version", "--help", "Commands:", "lattice", "stream"]
)
@skip_on_windows
def test_cli_lattice():
with TemporaryDirectory() as tempdir:
infile = os.path.join(testdir, "foo.pdf")
outfile = os.path.join(tempdir, "foo.csv")
runner = CliRunner()
result = runner.invoke(
cli, ["--format", "csv", "--output", outfile, "lattice", infile]
)
assert result.exit_code == 0
assert "Found 1 tables" in result.output
result = runner.invoke(cli, ["--format", "csv", "lattice", infile])
output_error = "Error: Please specify output file path using --output"
assert output_error in result.output
result = runner.invoke(cli, ["--output", outfile, "lattice", infile])
format_error = "Please specify output file format using --format"
assert format_error in result.output
def test_cli_stream():
with TemporaryDirectory() as tempdir:
infile = os.path.join(testdir, "budget.pdf")
outfile = os.path.join(tempdir, "budget.csv")
runner = CliRunner()
result = runner.invoke(
cli, ["--format", "csv", "--output", outfile, "stream", infile]
)
assert result.exit_code == 0
assert result.output == "Found 1 tables\n"
result = runner.invoke(cli, ["--format", "csv", "stream", infile])
output_error = "Error: Please specify output file path using --output"
assert output_error in result.output
result = runner.invoke(cli, ["--output", outfile, "stream", infile])
format_error = "Please specify output file format using --format"
assert format_error in result.output
def test_cli_password():
with TemporaryDirectory() as tempdir:
infile = os.path.join(testdir, "health_protected.pdf")
outfile = os.path.join(tempdir, "health_protected.csv")
runner = CliRunner()
result = runner.invoke(
cli,
[
"--password",
"userpass",
"--format",
"csv",
"--output",
outfile,
"stream",
infile,
],
)
assert result.exit_code == 0
assert result.output == "Found 1 tables\n"
output_error = "file has not been decrypted"
# no password
result = runner.invoke(
cli, ["--format", "csv", "--output", outfile, "stream", infile]
)
assert output_error in str(result.exception)
# bad password
result = runner.invoke(
cli,
[
"--password",
"wrongpass",
"--format",
"csv",
"--output",
outfile,
"stream",
infile,
],
)
assert output_error in str(result.exception)
def test_cli_output_format():
with TemporaryDirectory() as tempdir:
infile = os.path.join(testdir, "health.pdf")
runner = CliRunner()
# json
outfile = os.path.join(tempdir, "health.json")
result = runner.invoke(
cli,
["--format", "json", "--output", outfile, "stream", infile],
)
assert result.exit_code == 0, f"Output: {result.output}"
# excel
outfile = os.path.join(tempdir, "health.xlsx")
result = runner.invoke(
cli,
["--format", "excel", "--output", outfile, "stream", infile],
)
assert result.exit_code == 0, f"Output: {result.output}"
# html
outfile = os.path.join(tempdir, "health.html")
result = runner.invoke(
cli,
["--format", "html", "--output", outfile, "stream", infile],
)
assert result.exit_code == 0, f"Output: {result.output}"
# markdown
outfile = os.path.join(tempdir, "health.md")
result = runner.invoke(
cli,
["--format", "markdown", "--output", outfile, "stream", infile],
)
assert result.exit_code == 0, f"Output: {result.output}"
# zip
outfile = os.path.join(tempdir, "health.csv")
result = runner.invoke(
cli,
[
"--zip",
"--format",
"csv",
"--output",
outfile,
"stream",
infile,
],
)
assert result.exit_code == 0, f"Output: {result.output}"
def test_cli_quiet():
with TemporaryDirectory() as tempdir:
infile = os.path.join(testdir, "empty.pdf")
outfile = os.path.join(tempdir, "empty.csv")
runner = CliRunner()
result = runner.invoke(
cli, ["--format", "csv", "--output", outfile, "stream", infile]
)
assert "No tables found on page-1" in result.output
result = runner.invoke(
cli, ["--quiet", "--format", "csv", "--output", outfile, "stream", infile]
)
assert "No tables found on page-1" not in result.output


@ -1,174 +1,82 @@
# -*- coding: utf-8 -*-
import os
import sys
import pytest
import pandas as pd
from pandas.testing import assert_frame_equal
import camelot
from camelot.io import PDFHandler
from camelot.core import Table, TableList
from camelot.__version__ import generate_version
from camelot.backends import ImageConversionBackend
from .data import *
from test_data import *
testdir = os.path.dirname(os.path.abspath(__file__))
testdir = os.path.join(testdir, "files")
skip_on_windows = pytest.mark.skipif(
sys.platform.startswith("win"),
reason="Ghostscript not installed in Windows test environment",
)
def test_stream():
pass
def test_version_generation():
version = (0, 7, 3)
assert generate_version(version, prerelease=None, revision=None) == "0.7.3"
def test_stream_table_rotated():
df = pd.DataFrame(data_stream_table_rotated)
filename = os.path.join(testdir, "clockwise_table_2.pdf")
tables = camelot.read_pdf(filename, flavor="stream")
assert df.equals(tables[0].df)
filename = os.path.join(testdir, "anticlockwise_table_2.pdf")
tables = camelot.read_pdf(filename, flavor="stream")
assert df.equals(tables[0].df)
def test_version_generation_with_prerelease_revision():
version = (0, 7, 3)
prerelease = "alpha"
revision = 2
assert (
generate_version(version, prerelease=prerelease, revision=revision)
== "0.7.3-alpha.2"
)
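The expected behavior in the two version tests above can be satisfied by a minimal sketch (an illustration only; the real ``generate_version`` lives in ``camelot/__version__.py``):

```python
def generate_version(version, prerelease=None, revision=None):
    """Join a version tuple, appending '-<prerelease>.<revision>' when both are given."""
    v = ".".join(str(part) for part in version)
    if prerelease is not None and revision is not None:
        v = f"{v}-{prerelease}.{revision}"
    return v
```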
def test_stream_table_area():
df = pd.DataFrame(data_stream_table_area_single)
filename = os.path.join(testdir, "tabula/us-007.pdf")
tables = camelot.read_pdf(filename, flavor="stream", table_area=["320,500,573,335"])
assert df.equals(tables[0].df)
@skip_on_windows
def test_parsing_report():
parsing_report = {"accuracy": 99.02, "whitespace": 12.24, "order": 1, "page": 1}
def test_stream_columns():
df = pd.DataFrame(data_stream_columns)
filename = os.path.join(testdir, "foo.pdf")
filename = os.path.join(testdir, "mexican_towns.pdf")
tables = camelot.read_pdf(
filename, flavor="stream", columns=["67,180,230,425,475"], row_close_tol=10)
assert df.equals(tables[0].df)
def test_lattice():
df = pd.DataFrame(data_lattice)
filename = os.path.join(testdir,
"tabula/icdar2013-dataset/competition-dataset-us/us-030.pdf")
tables = camelot.read_pdf(filename, pages="2")
assert df.equals(tables[0].df)
def test_lattice_table_rotated():
df = pd.DataFrame(data_lattice_table_rotated)
filename = os.path.join(testdir, "clockwise_table_1.pdf")
tables = camelot.read_pdf(filename)
assert tables[0].parsing_report == parsing_report
assert df.equals(tables[0].df)
filename = os.path.join(testdir, "anticlockwise_table_1.pdf")
tables = camelot.read_pdf(filename)
assert df.equals(tables[0].df)
def test_password():
df = pd.DataFrame(data_stream)
def test_lattice_process_background():
df = pd.DataFrame(data_lattice_process_background)
filename = os.path.join(testdir, "health_protected.pdf")
tables = camelot.read_pdf(filename, password="ownerpass", flavor="stream")
assert_frame_equal(df, tables[0].df)
tables = camelot.read_pdf(filename, password="userpass", flavor="stream")
assert_frame_equal(df, tables[0].df)
filename = os.path.join(testdir, "background_lines_1.pdf")
tables = camelot.read_pdf(filename, process_background=True)
assert df.equals(tables[1].df)
def test_repr_poppler():
filename = os.path.join(testdir, "foo.pdf")
tables = camelot.read_pdf(filename, backend="poppler")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=219 x2=165 y2=234>"
def test_lattice_copy_text():
df = pd.DataFrame(data_lattice_copy_text)
@skip_on_windows
def test_repr_ghostscript():
filename = os.path.join(testdir, "foo.pdf")
tables = camelot.read_pdf(filename, backend="ghostscript")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=218 x2=165 y2=234>"
def test_url_poppler():
url = "https://camelot-py.readthedocs.io/en/master/_static/pdf/foo.pdf"
tables = camelot.read_pdf(url, backend="poppler")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=219 x2=165 y2=234>"
@skip_on_windows
def test_url_ghostscript():
url = "https://camelot-py.readthedocs.io/en/master/_static/pdf/foo.pdf"
tables = camelot.read_pdf(url, backend="ghostscript")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=218 x2=165 y2=234>"
def test_pages_poppler():
url = "https://camelot-py.readthedocs.io/en/master/_static/pdf/foo.pdf"
tables = camelot.read_pdf(url, backend="poppler")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=219 x2=165 y2=234>"
tables = camelot.read_pdf(url, pages="1-end", backend="poppler")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=219 x2=165 y2=234>"
tables = camelot.read_pdf(url, pages="all", backend="poppler")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=219 x2=165 y2=234>"
@skip_on_windows
def test_pages_ghostscript():
url = "https://camelot-py.readthedocs.io/en/master/_static/pdf/foo.pdf"
tables = camelot.read_pdf(url, backend="ghostscript")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=218 x2=165 y2=234>"
tables = camelot.read_pdf(url, pages="1-end", backend="ghostscript")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=218 x2=165 y2=234>"
tables = camelot.read_pdf(url, pages="all", backend="ghostscript")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=218 x2=165 y2=234>"
def test_table_order():
def _make_table(page, order):
t = Table([], [])
t.page = page
t.order = order
return t
table_list = TableList(
[_make_table(2, 1), _make_table(1, 1), _make_table(3, 4), _make_table(1, 2)]
)
assert [(t.page, t.order) for t in sorted(table_list)] == [
(1, 1),
(1, 2),
(2, 1),
(3, 4),
]
assert [(t.page, t.order) for t in sorted(table_list, reverse=True)] == [
(3, 4),
(2, 1),
(1, 2),
(1, 1),
]
def test_handler_pages_generator():
filename = os.path.join(testdir, "foo.pdf")
handler = PDFHandler(filename)
assert handler._get_pages("1") == [1]
handler = PDFHandler(filename)
assert handler._get_pages("all") == [1]
handler = PDFHandler(filename)
assert handler._get_pages("1-end") == [1]
handler = PDFHandler(filename)
assert handler._get_pages("1,2,3,4") == [1, 2, 3, 4]
handler = PDFHandler(filename)
assert handler._get_pages("1,2,5-10") == [1, 2, 5, 6, 7, 8, 9, 10]
filename = os.path.join(testdir, "row_span_1.pdf")
tables = camelot.read_pdf(filename, line_size_scaling=60, copy_text="v")
assert df.equals(tables[0].df)

tests/test_data.py 100644

@ -0,0 +1,189 @@
# -*- coding: utf-8 -*-
data_stream_table_rotated = [
["","","Table 21 Current use of contraception by background characteristics—Continued","","","","","","","","","","","","","","",""],
["","","","","","","Modern method","","","","","","","Traditional method","","","",""],
["","","","Any","","","","","","","Other","Any","","","","Not","","Number"],
["","","Any","modern","Female","Male","","","","Condom/","modern","traditional","","With-","Folk","currently","","of"],
["","Background characteristic","method","method","sterilization","sterilization","Pill","IUD","Injectables","Nirodh","method","method","Rhythm","drawal","method","using","Total","women"],
["","Caste/tribe","","","","","","","","","","","","","","","",""],
["","Scheduled caste","74.8","55.8","42.9","0.9","9.7","0.0","0.2","2.2","0.0","19.0","11.2","7.4","0.4","25.2","100.0","1,363"],
["","Scheduled tribe","59.3","39.0","26.8","0.6","6.4","0.6","1.2","3.5","0.0","20.3","10.4","5.8","4.1","40.7","100.0","256"],
["","Other backward class","71.4","51.1","34.9","0.0","8.6","1.4","0.0","6.2","0.0","20.4","12.6","7.8","0.0","28.6","100.0","211"],
["","Other","71.1","48.8","28.2","0.8","13.3","0.9","0.3","5.2","0.1","22.3","12.9","9.1","0.3","28.9","100.0","3,319"],
["","Wealth index","","","","","","","","","","","","","","","",""],
["","Lowest","64.5","48.6","34.3","0.5","10.5","0.6","0.7","2.0","0.0","15.9","9.9","4.6","1.4","35.5","100.0","1,258"],
["","Second","68.5","50.4","36.2","1.1","11.4","0.5","0.1","1.1","0.0","18.1","11.2","6.7","0.2","31.5","100.0","1,317"],
["","Middle","75.5","52.8","33.6","0.6","14.2","0.4","0.5","3.4","0.1","22.7","13.4","8.9","0.4","24.5","100.0","1,018"],
["","Fourth","73.9","52.3","32.0","0.5","12.5","0.6","0.2","6.3","0.2","21.6","11.5","9.9","0.2","26.1","100.0","908"],
["","Highest","78.3","44.4","19.5","1.0","9.7","1.4","0.0","12.7","0.0","33.8","18.2","15.6","0.0","21.7","100.0","733"],
["","Number of living children","","","","","","","","","","","","","","","",""],
["","No children","25.1","7.6","0.3","0.5","2.0","0.0","0.0","4.8","0.0","17.5","9.0","8.5","0.0","74.9","100.0","563"],
["","1 child","66.5","32.1","3.7","0.7","20.1","0.7","0.1","6.9","0.0","34.3","18.9","15.2","0.3","33.5","100.0","1,190"],
["","1 son","66.8","33.2","4.1","0.7","21.1","0.5","0.3","6.6","0.0","33.5","21.2","12.3","0.0","33.2","100.0","672"],
["","No sons","66.1","30.7","3.1","0.6","18.8","0.8","0.0","7.3","0.0","35.4","15.8","19.0","0.6","33.9","100.0","517"],
["","2 children","81.6","60.5","41.8","0.9","11.6","0.8","0.3","4.8","0.2","21.1","12.2","8.3","0.6","18.4","100.0","1,576"],
["","1 or more sons","83.7","64.2","46.4","0.9","10.8","0.8","0.4","4.8","0.1","19.5","11.1","7.6","0.7","16.3","100.0","1,268"],
["","No sons","73.2","45.5","23.2","1.0","15.1","0.9","0.0","4.8","0.5","27.7","16.8","11.0","0.0","26.8","100.0","308"],
["","3 children","83.9","71.2","57.7","0.8","9.8","0.6","0.5","1.8","0.0","12.7","8.7","3.3","0.8","16.1","100.0","961"],
["","1 or more sons","85.0","73.2","60.3","0.9","9.4","0.5","0.5","1.6","0.0","11.8","8.1","3.0","0.7","15.0","100.0","860"],
["","No sons","74.7","53.8","35.3","0.0","13.7","1.6","0.0","3.2","0.0","20.9","13.4","6.1","1.5","25.3","100.0","101"],
["","4+ children","74.3","58.1","45.1","0.6","8.7","0.6","0.7","2.4","0.0","16.1","9.9","5.4","0.8","25.7","100.0","944"],
["","1 or more sons","73.9","58.2","46.0","0.7","8.3","0.7","0.7","1.9","0.0","15.7","9.4","5.5","0.8","26.1","100.0","901"],
["","No sons","(82.1)","(57.3)","(25.6)","(0.0)","(17.8)","(0.0)","(0.0)","(13.9)","(0.0)","(24.8)","(21.3)","(3.5)","(0.0)","(17.9)","100.0","43"],
["","Total","71.2","49.9","32.2","0.7","11.7","0.6","0.3","4.3","0.1","21.3","12.3","8.4","0.5","28.8","100.0","5,234"],
["","NFHS-2 (1998-99)","66.6","47.3","32.0","1.8","9.2","1.4","na","2.9","na","na","8.7","9.8","na","33.4","100.0","4,116"],
["","NFHS-1 (1992-93)","57.7","37.6","26.5","4.3","3.6","1.3","0.1","1.9","na","na","11.3","8.3","na","42.3","100.0","3,970"],
["","","Note: If more than one method is used, only the most effective method is considered in this tabulation. Total includes women for whom caste/tribe was not known or is missing, who are","","","","","","","","","","","","","","",""],
["","not shown separately.","","","","","","","","","","","","","","","",""],
["","na = Not available","","","","","","","","","","","","","","","",""],
["","","ns = Not shown; see table 2b, footnote 1","","","","","","","","","","","","","","",""],
["","( ) Based on 25-49 unweighted cases.","","","","","","","","","","","","","","","",""],
["","","","","","","","","54","","","","","","","","",""]
]
data_stream_table_area_single = [
["","One Withholding"],
["Payroll Period","Allowance"],
["Weekly","$71.15"],
["Biweekly","142.31"],
["Semimonthly","154.17"],
["Monthly","308.33"],
["Quarterly","925.00"],
["Semiannually","1,850.00"],
["Annually","3,700.00"],
["Daily or Miscellaneous","14.23"],
["(each day of the payroll period)",""]
]
data_stream_columns = [
["Clave","Nombre Entidad","Clave","Nombre Municipio","Clave","Nombre Localidad"],
["Entidad","","Municipio","","Localidad",""],
["01","Aguascalientes","001","Aguascalientes","0094","Granja Adelita"],
["01","Aguascalientes","001","Aguascalientes","0096","Agua Azul"],
["01","Aguascalientes","001","Aguascalientes","0100","Rancho Alegre"],
["01","Aguascalientes","001","Aguascalientes","0102","Los Arbolitos [Rancho]"],
["01","Aguascalientes","001","Aguascalientes","0104","Ardillas de Abajo (Las Ardillas)"],
["01","Aguascalientes","001","Aguascalientes","0106","Arellano"],
["01","Aguascalientes","001","Aguascalientes","0112","Bajío los Vázquez"],
["01","Aguascalientes","001","Aguascalientes","0113","Bajío de Montoro"],
["01","Aguascalientes","001","Aguascalientes","0114","Residencial San Nicolás [Baños la Cantera]"],
["01","Aguascalientes","001","Aguascalientes","0120","Buenavista de Peñuelas"],
["01","Aguascalientes","001","Aguascalientes","0121","Cabecita 3 Marías (Rancho Nuevo)"],
["01","Aguascalientes","001","Aguascalientes","0125","Cañada Grande de Cotorina"],
["01","Aguascalientes","001","Aguascalientes","0126","Cañada Honda [Estación]"],
["01","Aguascalientes","001","Aguascalientes","0127","Los Caños"],
["01","Aguascalientes","001","Aguascalientes","0128","El Cariñán"],
["01","Aguascalientes","001","Aguascalientes","0129","El Carmen [Granja]"],
["01","Aguascalientes","001","Aguascalientes","0135","El Cedazo (Cedazo de San Antonio)"],
["01","Aguascalientes","001","Aguascalientes","0138","Centro de Arriba (El Taray)"],
["01","Aguascalientes","001","Aguascalientes","0139","Cieneguilla (La Lumbrera)"],
["01","Aguascalientes","001","Aguascalientes","0141","Cobos"],
["01","Aguascalientes","001","Aguascalientes","0144","El Colorado (El Soyatal)"],
["01","Aguascalientes","001","Aguascalientes","0146","El Conejal"],
["01","Aguascalientes","001","Aguascalientes","0157","Cotorina de Abajo"],
["01","Aguascalientes","001","Aguascalientes","0162","Coyotes"],
["01","Aguascalientes","001","Aguascalientes","0166","La Huerta (La Cruz)"],
["01","Aguascalientes","001","Aguascalientes","0170","Cuauhtémoc (Las Palomas)"],
["01","Aguascalientes","001","Aguascalientes","0171","Los Cuervos (Los Ojos de Agua)"],
["01","Aguascalientes","001","Aguascalientes","0172","San José [Granja]"],
["01","Aguascalientes","001","Aguascalientes","0176","La Chiripa"],
["01","Aguascalientes","001","Aguascalientes","0182","Dolores"],
["01","Aguascalientes","001","Aguascalientes","0183","Los Dolores"],
["01","Aguascalientes","001","Aguascalientes","0190","El Duraznillo"],
["01","Aguascalientes","001","Aguascalientes","0191","Los Durón"],
["01","Aguascalientes","001","Aguascalientes","0197","La Escondida"],
["01","Aguascalientes","001","Aguascalientes","0201","Brande Vin [Bodegas]"],
["01","Aguascalientes","001","Aguascalientes","0207","Valle Redondo"],
["01","Aguascalientes","001","Aguascalientes","0209","La Fortuna"],
["01","Aguascalientes","001","Aguascalientes","0212","Lomas del Gachupín"],
["01","Aguascalientes","001","Aguascalientes","0213","El Carmen (Gallinas Güeras) [Rancho]"],
["01","Aguascalientes","001","Aguascalientes","0216","La Gloria"],
["01","Aguascalientes","001","Aguascalientes","0226","Hacienda Nueva"],
]
data_lattice = [
["Cycle Name","KI (1/km)","Distance (mi)","Percent Fuel Savings","","",""],
["","","","Improved Speed","Decreased Accel","Eliminate Stops","Decreased Idle"],
["2012_2","3.30","1.3","5.9%","9.5%","29.2%","17.4%"],
["2145_1","0.68","11.2","2.4%","0.1%","9.5%","2.7%"],
["4234_1","0.59","58.7","8.5%","1.3%","8.5%","3.3%"],
["2032_2","0.17","57.8","21.7%","0.3%","2.7%","1.2%"],
["4171_1","0.07","173.9","58.1%","1.6%","2.1%","0.5%"]
]
data_lattice_table_rotated = [
["State","Nutritional Assessment (No. of individuals)","","","","IYCF Practices (No. of mothers: 2011-12)","Blood Pressure (No. of adults: 2011-12)","","Fasting Blood Sugar (No. of adults:2011-12)",""],
["","1975-79","1988-90","1996-97","2011-12","","Men","Women","Men","Women"],
["Kerala","5738","6633","8864","8297","245","2161","3195","1645","2391"],
["Tamil Nadu","7387","10217","5813","7851","413","2134","2858","1119","1739"],
["Karnataka","6453","8138","12606","8958","428","2467","2894","1628","2028"],
["Andhra Pradesh","5844","9920","9545","8300","557","1899","2493","1111","1529"],
["Maharashtra","5161","7796","6883","9525","467","2368","2648","1417","1599"],
["Gujarat","4403","5374","4866","9645","477","2687","3021","2122","2503"],
["Madhya Pradesh","*","*","*","7942","470","1965","2150","1579","1709"],
["Orissa","3756","5540","12024","8473","398","2040","2624","1093","1628"],
["West Bengal","*","*","*","8047","423","2058","2743","1413","2027"],
["Uttar Pradesh","*","*","*","9860","581","2139","2415","1185","1366"],
["Pooled","38742","53618","60601","86898","4459","21918","27041","14312","18519"]
]
data_lattice_process_background = [
["State","Date","Halt stations","Halt days","Persons directly reached(in lakh)","Persons trained","Persons counseled","Persons testedfor HIV"],
["Delhi","1.12.2009","8","17","1.29","3,665","2,409","1,000"],
["Rajasthan","2.12.2009 to 19.12.2009","","","","","",""],
["Gujarat","20.12.2009 to 3.1.2010","6","13","6.03","3,810","2,317","1,453"],
["Maharashtra","4.01.2010 to 1.2.2010","13","26","1.27","5,680","9,027","4,153"],
["Karnataka","2.2.2010 to 22.2.2010","11","19","1.80","5,741","3,658","3,183"],
["Kerala","23.2.2010 to 11.3.2010","9","17","1.42","3,559","2,173","855"],
["Total","","47","92","11.81","22,455","19,584","10,644"]
]
data_lattice_copy_text = [
["Plan Type","County","Plan Name","Totals"],
["GMC","Sacramento","Anthem Blue Cross","164,380"],
["GMC","Sacramento","Health Net","126,547"],
["GMC","Sacramento","Kaiser Foundation","74,620"],
["GMC","Sacramento","Molina Healthcare","59,989"],
["GMC","San Diego","Care 1st Health Plan","71,831"],
["GMC","San Diego","Community Health Group","264,639"],
["GMC","San Diego","Health Net","72,404"],
["GMC","San Diego","Kaiser","50,415"],
["GMC","San Diego","Molina Healthcare","206,430"],
["GMC","Total GMC Enrollment","","1,091,255"],
["COHS","Marin","Partnership Health Plan of CA","36,006"],
["COHS","Mendocino","Partnership Health Plan of CA","37,243"],
["COHS","Napa","Partnership Health Plan of CA","28,398"],
["COHS","Solano","Partnership Health Plan of CA","113,220"],
["COHS","Sonoma","Partnership Health Plan of CA","112,271"],
["COHS","Yolo","Partnership Health Plan of CA","52,674"],
["COHS","Del Norte","Partnership Health Plan of CA","11,242"],
["COHS","Humboldt","Partnership Health Plan of CA","49,911"],
["COHS","Lake","Partnership Health Plan of CA","29,149"],
["COHS","Lassen","Partnership Health Plan of CA","7,360"],
["COHS","Modoc","Partnership Health Plan of CA","2,940"],
["COHS","Shasta","Partnership Health Plan of CA","61,763"],
["COHS","Siskiyou","Partnership Health Plan of CA","16,715"],
["COHS","Trinity","Partnership Health Plan of CA","4,542"],
["COHS","Merced","Central California Alliance for Health","123,907"],
["COHS","Monterey","Central California Alliance for Health","147,397"],
["COHS","Santa Cruz","Central California Alliance for Health","69,458"],
["COHS","Santa Barbara","CenCal","117,609"],
["COHS","San Luis Obispo","CenCal","55,761"],
["COHS","Orange","CalOptima","783,079"],
["COHS","San Mateo","Health Plan of San Mateo","113,202"],
["COHS","Ventura","Gold Coast Health Plan","202,217"],
["COHS","Total COHS Enrollment","","2,176,064"],
["Subtotal for Two-Plan, Regional Model, GMC and COHS","","","10,132,022"],
["PCCM","Los Angeles","AIDS Healthcare Foundation","828"],
["PCCM","San Francisco","Family Mosaic","25"],
["PCCM","Total PHP Enrollment","","853"],
["All Models Total Enrollments","","","10,132,875"],
["Source: Data Warehouse 12/14/15","","",""]
]
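Fixtures like the ones above are presumably compared row-for-row against a parser's extracted table. A minimal sanity check one might run over such fixtures is sketched below (a hypothetical helper, with a truncated copy of `data_lattice` inlined so it runs standalone):

```python
# Hypothetical sanity check for table fixtures: every row in a
# fixture should have the same number of columns, since each list
# of lists represents one rectangular table.

data_lattice = [  # truncated copy of the fixture above
    ["Cycle Name", "KI (1/km)", "Distance (mi)", "Percent Fuel Savings", "", "", ""],
    ["", "", "", "Improved Speed", "Decreased Accel", "Eliminate Stops", "Decreased Idle"],
    ["2012_2", "3.30", "1.3", "5.9%", "9.5%", "29.2%", "17.4%"],
]

def is_rectangular(table):
    """Return True if all rows of the table have the same column count."""
    widths = {len(row) for row in table}
    return len(widths) == 1

assert is_rectangular(data_lattice)
```

A check like this catches a fixture row that lost or gained a cell during copy-paste before it surfaces as a confusing assertion failure deep inside a test.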