Compare commits


No commits in common. "master" and "v0.6.0" have entirely different histories.

74 changed files with 1683 additions and 5736 deletions

.editorconfig 100644

@@ -0,0 +1,10 @@
root = true
[*]
end_of_line = lf
insert_final_newline = true
[*.py]
charset = utf-8
indent_style = space
indent_size = 4

.github/FUNDING.yml

@@ -1 +0,0 @@
open_collective: camelot


@@ -1,57 +0,0 @@
---
name: Bug report
about: Please follow this template to submit bug reports.
title: ''
labels: bug
assignees: ''
---
<!-- Please read the filing issues section of the contributor's guide first: https://camelot-py.readthedocs.io/en/master/dev/contributing.html -->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
**Steps to reproduce the bug**
<!-- Steps used to install `camelot`:
1. Add step here (you can add more steps too) -->
<!-- Steps to be used to reproduce behavior:
1. Add step here (you can add more steps too) -->
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Code**
<!-- Add the Camelot code snippet that you used. -->
```
import camelot
# add your code here
```
**PDF**
<!-- Add the PDF file that you want to extract tables from. -->
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->
**Environment**
- OS: [e.g. macOS]
- Python version:
- Numpy version:
- OpenCV version:
- Ghostscript version:
- Camelot version:
**Additional context**
<!-- Add any other context about the problem here. -->


@@ -1,44 +0,0 @@
name: tests

on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.6, 3.7, 3.8]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install camelot with dependencies
        run: |
          make install
      - name: Test with pytest
        run: |
          make test

  test_latest:
    name: Test on ${{ matrix.os }} with Python 3.9
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        python-version: [3.9]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install camelot with dependencies
        run: |
          make install
      - name: Test with pytest
        run: |
          make test

.gitignore

@@ -1,4 +1,3 @@
-fontconfig/
__pycache__/
*.py[cod]
*.so
@@ -13,8 +12,5 @@ coverage.xml
.pytest_cache/
_build/
-.venv/
-htmlcov/
# vscode
.vscode


@@ -1,27 +0,0 @@
# .readthedocs.yml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/conf.py

# Build documentation with MkDocs
#mkdocs:
#  configuration: mkdocs.yml

# Optionally build your docs in additional formats such as PDF
formats:
  - pdf

# Optionally set the version of Python and requirements required to build your docs
python:
  version: 3.8
  install:
    - method: pip
      path: .
      extra_requirements:
        - dev
.travis.yml 100755

@@ -0,0 +1,32 @@
sudo: true
language: python
cache: pip
addons:
  apt:
    update: true
install:
  - make install
jobs:
  include:
    - stage: test
      script:
        - make test
      python: '2.7'
    - stage: test
      script:
        - make test
      python: '3.5'
    - stage: test
      script:
        - make test
      python: '3.6'
    - stage: test
      script:
        - make test
      python: '3.7'
      dist: xenial
    - stage: coverage
      python: '3.6'
      script:
        - make test
        - codecov --verbose


@@ -16,14 +16,14 @@ As the [Requests Code Of Conduct](http://docs.python-requests.org/en/master/dev/
## Your first contribution
-A great way to start contributing to Camelot is to pick an issue tagged with the [help wanted](https://github.com/camelot-dev/camelot/labels/help%20wanted) tag or the [good first issue](https://github.com/camelot-dev/camelot/labels/good%20first%20issue) tag. If you're unable to find a good first issue, feel free to contact the maintainer.
+A great way to start contributing to Camelot is to pick an issue tagged with the [help wanted](https://github.com/socialcopsdev/camelot/labels/help%20wanted) tag or the [good first issue](https://github.com/socialcopsdev/camelot/labels/good%20first%20issue) tag. If you're unable to find a good first issue, feel free to contact the maintainer.
## Setting up a development environment
To install the dependencies needed for development, you can use pip:
<pre>
-$ pip install "camelot-py[dev]"
+$ pip install camelot-py[dev]
</pre>
Alternatively, you can clone the project repository, and install using pip:
@@ -36,7 +36,7 @@ $ pip install ".[dev]"
### Submit a pull request
-The preferred workflow for contributing to Camelot is to fork the [project repository](https://github.com/camelot-dev/camelot) on GitHub, clone, develop on a branch and then finally submit a pull request. Here are the steps:
+The preferred workflow for contributing to Camelot is to fork the [project repository](https://github.com/socialcopsdev/camelot) on GitHub, clone, develop on a branch and then finally submit a pull request. Here are the steps:
1. Fork the project repository. Click on the Fork button near the top of the page. This creates a copy of the code under your account on the GitHub.
@@ -106,7 +106,7 @@ The function docstrings are written using the [numpydoc](https://numpydoc.readth
## Filing Issues
-We use [GitHub issues](https://github.com/camelot-dev/camelot/issues) to keep track of all issues and pull requests. Before opening an issue (which asks a question or reports a bug), please use GitHub search to look for existing issues (both open and closed) that may be similar.
+We use [GitHub issues](https://github.com/socialcopsdev/camelot/issues) to keep track of all issues and pull requests. Before opening an issue (which asks a question or reports a bug), please use GitHub search to look for existing issues (both open and closed) that may be similar.
### Questions


@@ -4,126 +4,6 @@ Release History
master
------
0.10.1 (2021-07-11)
------------------
- Change extra requirements from `cv` to `base`. You can use `pip install "camelot-py[base]"` to install everything required to run camelot.
0.10.0 (2021-07-11)
------------------
**Improvements**
- Add support for multiple image conversion backends. [#198](https://github.com/camelot-dev/camelot/pull/198) and [#253](https://github.com/camelot-dev/camelot/pull/253) by Vinayak Mehta.
- Add markdown export format. [#222](https://github.com/camelot-dev/camelot/pull/222/) by [Lucas Cimon](https://github.com/Lucas-C).
**Documentation**
- Add faq section. [#216](https://github.com/camelot-dev/camelot/pull/216) by [Stefano Fiorucci](https://github.com/anakin87).
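The two entries above (pluggable image-conversion backends and Markdown export) surface in the public API as a `backend` keyword on `read_pdf` and a `to_markdown` method on tables. A minimal illustrative sketch, assuming the keyword spelling from the 0.10.0 notes and using `foo.pdf` as a placeholder file:
```python
import camelot

# pick the image conversion backend explicitly ("poppler" or "ghostscript")
tables = camelot.read_pdf("foo.pdf", flavor="lattice", backend="ghostscript")

# export a single table as Markdown (added in 0.10.0)
tables[0].to_markdown("foo.md")
```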
0.9.0 (2021-06-15)
------------------
**Bugfixes**
- Fix use of resolution argument to generate image with ghostscript. [#231](https://github.com/camelot-dev/camelot/pull/231) by [Tiago Samaha Cordeiro](https://github.com/tiagosamaha).
- [#15](https://github.com/camelot-dev/camelot/issues/15) Fix duplicate strings being assigned to the same cell. [#206](https://github.com/camelot-dev/camelot/pull/206) by [Eduardo Gonzalez Lopez de Murillas](https://github.com/edugonza).
- Save plot when filename is specified. [#121](https://github.com/camelot-dev/camelot/pull/121) by [Jens Diemer](https://github.com/jedie).
- Close file streams explicitly. [#202](https://github.com/camelot-dev/camelot/pull/202) by [Martin Abente Lahaye](https://github.com/tchx84).
- Use correct re.sub signature. [#186](https://github.com/camelot-dev/camelot/pull/186) by [pevisscher](https://github.com/pevisscher).
- [#183](https://github.com/camelot-dev/camelot/issues/183) Fix UnicodeEncodeError when using Stream flavor by adding encoding kwarg to `to_html`. [#188](https://github.com/camelot-dev/camelot/pull/188) by [Stefano Fiorucci](https://github.com/anakin87).
- [#179](https://github.com/camelot-dev/camelot/issues/179) Fix `max() arg is an empty sequence` error on PDFs with blank pages. [#189](https://github.com/camelot-dev/camelot/pull/189) by Vinayak Mehta.
**Improvements**
- Add `line_overlap` and `boxes_flow` to `LAParams`. [#219](https://github.com/camelot-dev/camelot/pull/219) by [Arnie97](https://github.com/Arnie97).
- [Add bug report template.](https://github.com/camelot-dev/camelot/commit/0a3944e54d133b701edfe9c7546ff11289301ba8)
- Move from [Travis to GitHub Actions](https://github.com/camelot-dev/camelot/pull/241).
- Update `.readthedocs.yml` and [remove requirements.txt](https://github.com/camelot-dev/camelot/commit/7ab5db39d07baa4063f975e9e00f6073340e04c1#diff-cde814ef2f549dc093f5b8fc533b7e8f47e7b32a8081e0760e57d5c25a1139d9)
**Documentation**
- [#193](https://github.com/camelot-dev/camelot/issues/193) Add better checks to confirm proper installation of ghostscript. [#196](https://github.com/camelot-dev/camelot/pull/196) by [jimhall](https://github.com/jimhall).
- Update `advanced.rst` plotting examples. [#119](https://github.com/camelot-dev/camelot/pull/119) by [Jens Diemer](https://github.com/jedie).
0.8.2 (2020-07-27)
------------------
* Revert the changes in `0.8.1`.
0.8.1 (2020-07-21)
------------------
**Bugfixes**
* [#169](https://github.com/camelot-dev/camelot/issues/169) Fix import error caused by `pdfminer.six==20200720`. [#171](https://github.com/camelot-dev/camelot/pull/171) by Vinayak Mehta.
0.8.0 (2020-05-24)
------------------
**Improvements**
* Drop Python 2 support!
* Remove Python 2.7 and 3.5 support.
* Replace all instances of `.format` with f-strings.
* Remove all `__future__` imports.
* Fix HTTP 403 forbidden exception in read_pdf(url) and remove Python 2 urllib support.
* Fix test data.
**Bugfixes**
* Fix library discovery on Windows. [#32](https://github.com/camelot-dev/camelot/pull/32) by [KOLANICH](https://github.com/KOLANICH).
* Fix calling convention of callback functions. [#34](https://github.com/camelot-dev/camelot/pull/34) by [KOLANICH](https://github.com/KOLANICH).
0.7.3 (2019-07-07)
------------------
**Improvements**
* Camelot now follows the Black code style! [#1](https://github.com/camelot-dev/camelot/pull/1) and [#3](https://github.com/camelot-dev/camelot/pull/3).
**Bugfixes**
* Fix Click.HelpFormatter monkey-patch. [#5](https://github.com/camelot-dev/camelot/pull/5) by [Dimiter Naydenov](https://github.com/dimitern).
* Fix strip_text argument getting ignored. [#4](https://github.com/camelot-dev/camelot/pull/4) by [Dimiter Naydenov](https://github.com/dimitern).
* [#25](https://github.com/camelot-dev/camelot/issues/25) edge_tol skipped in read_pdf. [#26](https://github.com/camelot-dev/camelot/pull/26) by Vinayak Mehta.
* Fix pytest deprecation warning. [#2](https://github.com/camelot-dev/camelot/pull/2) by Vinayak Mehta.
* [#293](https://github.com/socialcopsdev/camelot/issues/293) Split text ignores all text to the right of last cut. [#294](https://github.com/socialcopsdev/camelot/pull/294) by Vinayak Mehta.
* [#277](https://github.com/socialcopsdev/camelot/issues/277) Sort TableList by order of tables in PDF. [#283](https://github.com/socialcopsdev/camelot/pull/283) by [Sym Roe](https://github.com/symroe).
* [#312](https://github.com/socialcopsdev/camelot/issues/312) `table_regions` throws `ValueError` when `flavor='stream'`. [#332](https://github.com/socialcopsdev/camelot/pull/332) by Vinayak Mehta.
0.7.2 (2019-01-10)
------------------
**Bugfixes**
* [#245](https://github.com/socialcopsdev/camelot/issues/245) Fix AttributeError for encrypted files. [#251](https://github.com/socialcopsdev/camelot/pull/251) by Yatin Taluja.
0.7.1 (2019-01-06)
------------------
**Bugfixes**
* Move ghostscript import to inside the function so Anaconda builds don't fail.
0.7.0 (2019-01-05)
------------------
**Improvements**
* [#209](https://github.com/socialcopsdev/camelot/issues/209) Add support to analyze only certain page regions to look for tables. [#243](https://github.com/socialcopsdev/camelot/pull/243) by Vinayak Mehta.
* You can use `table_regions` in `read_pdf()` to specify approximate page regions which may contain tables.
* Kwarg `line_size_scaling` is now called `line_scale`.
* [#212](https://github.com/socialcopsdev/camelot/issues/212) Add support to export as sqlite database. [#244](https://github.com/socialcopsdev/camelot/pull/244) by Vinayak Mehta.
* [#239](https://github.com/socialcopsdev/camelot/issues/239) Raise warning if PDF is image-based. [#240](https://github.com/socialcopsdev/camelot/pull/240) by Vinayak Mehta.
**Documentation**
* Remove mention of old mesh kwarg from docs. [#241](https://github.com/socialcopsdev/camelot/pull/241) by [fte10kso](https://github.com/fte10kso).
**Note**: The python wrapper to Ghostscript's C API is now vendorized under the `ext` module. This was done due to unavailability of the [ghostscript](https://pypi.org/project/ghostscript/) package on Anaconda. The code should be removed after we submit a recipe for it to conda-forge. With this release, the user doesn't need to ensure that the Ghostscript executable is available on the PATH variable.
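The `table_regions` keyword introduced in 0.7.0 takes approximate page regions as `"x1,y1,x2,y2"` strings in PDF coordinate space. A short sketch, with placeholder coordinates and file name:
```python
import camelot

# look for tables only inside the given region of each page
tables = camelot.read_pdf(
    "foo.pdf",
    flavor="lattice",
    table_regions=["170,370,560,270"],
)
print(tables[0].parsing_report)
```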
0.6.0 (2018-12-24)
------------------


@@ -1,7 +1,6 @@
MIT License
-Copyright (c) 2019-2021 Camelot Developers
-Copyright (c) 2018-2019 Peeply Private Ltd (Singapore)
+Copyright (c) 2018 Peeply Private Ltd (Singapore)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -1,28 +1,27 @@
<p align="center">
-<img src="https://raw.githubusercontent.com/camelot-dev/camelot/master/docs/_static/camelot.png" width="200">
+<img src="https://raw.githubusercontent.com/socialcopsdev/camelot/master/docs/_static/camelot.png" width="200">
</p>
# Camelot: PDF Table Extraction for Humans
-[![tests](https://github.com/camelot-dev/camelot/actions/workflows/tests.yml/badge.svg)](https://github.com/camelot-dev/camelot/actions/workflows/tests.yml) [![Documentation Status](https://readthedocs.org/projects/camelot-py/badge/?version=master)](https://camelot-py.readthedocs.io/en/master/)
+[![Build Status](https://travis-ci.org/socialcopsdev/camelot.svg?branch=master)](https://travis-ci.org/socialcopsdev/camelot) [![Documentation Status](https://readthedocs.org/projects/camelot-py/badge/?version=master)](https://camelot-py.readthedocs.io/en/master/)
-[![codecov.io](https://codecov.io/github/camelot-dev/camelot/badge.svg?branch=master&service=github)](https://codecov.io/github/camelot-dev/camelot?branch=master)
+[![codecov.io](https://codecov.io/github/socialcopsdev/camelot/badge.svg?branch=master&service=github)](https://codecov.io/github/socialcopsdev/camelot?branch=master)
[![image](https://img.shields.io/pypi/v/camelot-py.svg)](https://pypi.org/project/camelot-py/) [![image](https://img.shields.io/pypi/l/camelot-py.svg)](https://pypi.org/project/camelot-py/) [![image](https://img.shields.io/pypi/pyversions/camelot-py.svg)](https://pypi.org/project/camelot-py/) [![Gitter chat](https://badges.gitter.im/camelot-dev/Lobby.png)](https://gitter.im/camelot-dev/Lobby)
-[![image](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black)
-**Camelot** is a Python library that can help you extract tables from PDFs!
+**Camelot** is a Python library that makes it easy for *anyone* to extract tables from PDF files!
-**Note:** You can also check out [Excalibur](https://github.com/camelot-dev/excalibur), the web interface to Camelot!
+**Note:** You can also check out [Excalibur](https://github.com/camelot-dev/excalibur), which is a web interface for Camelot!
---
-**Here's how you can extract tables from PDFs.** You can check out the PDF used in this example [here](https://github.com/camelot-dev/camelot/blob/master/docs/_static/pdf/foo.pdf).
+**Here's how you can extract tables from PDF files.** Check out the PDF used in this example [here](https://github.com/socialcopsdev/camelot/blob/master/docs/_static/pdf/foo.pdf).
<pre>
>>> import camelot
>>> tables = camelot.read_pdf('foo.pdf')
>>> tables
&lt;TableList n=1&gt;
->>> tables.export('foo.csv', f='csv', compress=True) # json, excel, html, markdown, sqlite
+>>> tables.export('foo.csv', f='csv', compress=True) # json, excel, html
>>> tables[0]
&lt;Table shape=(7, 7)&gt;
>>> tables[0].parsing_report
@@ -32,7 +31,7 @@
'order': 1,
'page': 1
}
->>> tables[0].to_csv('foo.csv') # to_json, to_excel, to_html, to_markdown, to_sqlite
+>>> tables[0].to_csv('foo.csv') # to_json, to_excel, to_html
>>> tables[0].df # get a pandas DataFrame!
</pre>
@@ -45,29 +44,24 @@
| 2032_2 | 0.17 | 57.8 | 21.7% | 0.3% | 2.7% | 1.2% |
| 4171_1 | 0.07 | 173.9 | 58.1% | 1.6% | 2.1% | 0.5% |
-Camelot also comes packaged with a [command-line interface](https://camelot-py.readthedocs.io/en/master/user/cli.html)!
+There's a [command-line interface](https://camelot-py.readthedocs.io/en/master/user/cli.html) too!
**Note:** Camelot only works with text-based PDFs and not scanned documents. (As Tabula [explains](https://github.com/tabulapdf/tabula#why-tabula), "If you can click and drag to select text in your table in a PDF viewer, then your PDF is text-based".)
-You can check out some frequently asked questions [here](https://camelot-py.readthedocs.io/en/master/user/faq.html).
## Why Camelot?
-- **Configurability**: Camelot gives you control over the table extraction process with [tweakable settings](https://camelot-py.readthedocs.io/en/master/user/advanced.html).
+- **You are in control.**: Unlike other libraries and tools which either give a nice output or fail miserably (with no in-between), Camelot gives you the power to tweak table extraction. (This is important since everything in the real world, including PDF table extraction, is fuzzy.)
-- **Metrics**: You can discard bad tables based on metrics like accuracy and whitespace, without having to manually look at each table.
+- *Bad* tables can be discarded based on **metrics** like accuracy and whitespace, without ever having to manually look at each table.
-- **Output**: Each table is extracted into a **pandas DataFrame**, which seamlessly integrates into [ETL and data analysis workflows](https://gist.github.com/vinayak-mehta/e5949f7c2410a0e12f25d3682dc9e873). You can also export tables to multiple formats, which include CSV, JSON, Excel, HTML, Markdown, and Sqlite.
+- Each table is a **pandas DataFrame**, which seamlessly integrates into [ETL and data analysis workflows](https://gist.github.com/vinayak-mehta/e5949f7c2410a0e12f25d3682dc9e873).
+- **Export** to multiple formats, including JSON, Excel and HTML.
-See [comparison with similar libraries and tools](https://github.com/camelot-dev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools).
+See [comparison with other PDF table extraction libraries and tools](https://github.com/socialcopsdev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools).
-## Support the development
-If Camelot has helped you, please consider supporting its development with a one-time or monthly donation [on OpenCollective](https://opencollective.com/camelot).
## Installation
### Using conda
-The easiest way to install Camelot is with [conda](https://conda.io/docs/), which is a package manager and environment management system for the [Anaconda](http://docs.continuum.io/anaconda/) distribution.
+The easiest way to install Camelot is to install it with [conda](https://conda.io/docs/), which is a package manager and environment management system for the [Anaconda](http://docs.continuum.io/anaconda/) distribution.
<pre>
$ conda install -c conda-forge camelot-py
@@ -75,10 +69,10 @@ $ conda install -c conda-forge camelot-py
### Using pip
-After [installing the dependencies](https://camelot-py.readthedocs.io/en/master/user/install-deps.html) ([tk](https://packages.ubuntu.com/bionic/python/python-tk) and [ghostscript](https://www.ghostscript.com/)), you can also just use pip to install Camelot:
+After [installing the dependencies](https://camelot-py.readthedocs.io/en/master/user/install-deps.html) ([tk](https://packages.ubuntu.com/trusty/python-tk) and [ghostscript](https://www.ghostscript.com/)), you can simply use pip to install Camelot:
<pre>
-$ pip install "camelot-py[base]"
+$ pip install camelot-py[cv]
</pre>
### From the source code
@@ -86,32 +80,52 @@ $ pip install "camelot-py[base]"
After [installing the dependencies](https://camelot-py.readthedocs.io/en/master/user/install.html#using-pip), clone the repo using:
<pre>
-$ git clone https://www.github.com/camelot-dev/camelot
+$ git clone https://www.github.com/socialcopsdev/camelot
</pre>
and install Camelot using pip:
<pre>
$ cd camelot
-$ pip install ".[base]"
+$ pip install ".[cv]"
</pre>
## Documentation
-The documentation is available at [http://camelot-py.readthedocs.io/](http://camelot-py.readthedocs.io/).
+Great documentation is available at [http://camelot-py.readthedocs.io/](http://camelot-py.readthedocs.io/).
-## Wrappers
+## Development
-- [camelot-php](https://github.com/randomstate/camelot-php) provides a [PHP](https://www.php.net/) wrapper on Camelot.
+The [Contributor's Guide](https://camelot-py.readthedocs.io/en/master/dev/contributing.html) has detailed information about contributing code, documentation, tests and more. We've included some basic information in this README.
-## Contributing
+### Source code
-The [Contributor's Guide](https://camelot-py.readthedocs.io/en/master/dev/contributing.html) has detailed information about contributing issues, documentation, code, and tests.
+You can check the latest sources with:
+<pre>
+$ git clone https://www.github.com/socialcopsdev/camelot
+</pre>
+### Setting up a development environment
+You can install the development dependencies easily, using pip:
+<pre>
+$ pip install camelot-py[dev]
+</pre>
+### Testing
+After installation, you can run tests using:
+<pre>
+$ python setup.py test
+</pre>
## Versioning
-Camelot uses [Semantic Versioning](https://semver.org/). For the available versions, see the tags on this repository. For the changelog, you can check out [HISTORY.md](https://github.com/camelot-dev/camelot/blob/master/HISTORY.md).
+Camelot uses [Semantic Versioning](https://semver.org/). For the available versions, see the tags on this repository. For the changelog, you can check out [HISTORY.md](https://github.com/socialcopsdev/camelot/blob/master/HISTORY.md).
## License
-This project is licensed under the MIT License, see the [LICENSE](https://github.com/camelot-dev/camelot/blob/master/LICENSE) file for details.
+This project is licensed under the MIT License, see the [LICENSE](https://github.com/socialcopsdev/camelot/blob/master/LICENSE) file for details.
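The "metrics" bullet above is worth a concrete illustration: every parsed table carries an accuracy and whitespace score in its `parsing_report`, so low-quality tables can be filtered out programmatically. A small sketch; the thresholds and file name are arbitrary placeholders:
```python
import camelot

tables = camelot.read_pdf("foo.pdf")

# keep only tables whose parsing report looks trustworthy
good = [
    t for t in tables
    if t.parsing_report["accuracy"] > 80 and t.parsing_report["whitespace"] < 20
]
for t in good:
    print(t.parsing_report, t.df.shape)
```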


@@ -2,16 +2,26 @@
import logging
+from click import HelpFormatter
from .__version__ import __version__
from .io import read_pdf
from .plotting import PlotMethods
-# set up logging
-logger = logging.getLogger("camelot")
-format_string = "%(asctime)s - %(levelname)s - %(message)s"
-formatter = logging.Formatter(format_string, datefmt="%Y-%m-%dT%H:%M:%S")
+def _write_usage(self, prog, args='', prefix='Usage: '):
+    return self._write_usage('camelot', args, prefix=prefix)
+# monkey patch click.HelpFormatter
+HelpFormatter._write_usage = HelpFormatter.write_usage
+HelpFormatter.write_usage = _write_usage
+# set up logging
+logger = logging.getLogger('camelot')
+format_string = '%(asctime)s - %(levelname)s - %(message)s'
+formatter = logging.Formatter(format_string, datefmt='%Y-%m-%dT%H:%M:%S')
handler = logging.StreamHandler()
handler.setFormatter(formatter)


@@ -1,7 +1,9 @@
# -*- coding: utf-8 -*-
+from __future__ import absolute_import
-__all__ = ("main",)
+__all__ = ('main',)
def main():


@@ -1,23 +1,23 @@
# -*- coding: utf-8 -*-
-VERSION = (0, 10, 1)
+VERSION = (0, 6, 0)
PRERELEASE = None # alpha, beta or rc
REVISION = None
def generate_version(version, prerelease=None, revision=None):
-    version_parts = [".".join(map(str, version))]
+    version_parts = ['.'.join(map(str, version))]
    if prerelease is not None:
-        version_parts.append(f"-{prerelease}")
+        version_parts.append('-{}'.format(prerelease))
    if revision is not None:
-        version_parts.append(f".{revision}")
+        version_parts.append('.{}'.format(revision))
-    return "".join(version_parts)
+    return ''.join(version_parts)
-__title__ = "camelot-py"
+__title__ = 'camelot-py'
-__description__ = "PDF Table Extraction for Humans."
+__description__ = 'PDF Table Extraction for Humans.'
-__url__ = "http://camelot-py.readthedocs.io/"
+__url__ = 'http://camelot-py.readthedocs.io/'
__version__ = generate_version(VERSION, prerelease=PRERELEASE, revision=REVISION)
-__author__ = "Vinayak Mehta"
+__author__ = 'Vinayak Mehta'
-__author_email__ = "vmehta94@gmail.com"
+__author_email__ = 'vmehta94@gmail.com'
-__license__ = "MIT License"
+__license__ = 'MIT License'
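The `generate_version` helper shown above simply joins the version tuple and appends optional pre-release and revision qualifiers; a quick, self-contained illustration of its behaviour (using the f-string variant from the left-hand side of the diff):
```python
def generate_version(version, prerelease=None, revision=None):
    # join the tuple into "major.minor.patch", then append optional qualifiers
    version_parts = [".".join(map(str, version))]
    if prerelease is not None:
        version_parts.append(f"-{prerelease}")
    if revision is not None:
        version_parts.append(f".{revision}")
    return "".join(version_parts)

print(generate_version((0, 10, 1)))                     # 0.10.1
print(generate_version((0, 6, 0), prerelease="alpha"))  # 0.6.0-alpha
print(generate_version((0, 6, 0), "rc", 1))             # 0.6.0-rc.1
```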


@@ -1,3 +0,0 @@
# -*- coding: utf-8 -*-
from .image_conversion import ImageConversionBackend


@@ -1,47 +0,0 @@
# -*- coding: utf-8 -*-

import sys
import ctypes
from ctypes.util import find_library


def installed_posix():
    library = find_library("gs")
    return library is not None


def installed_windows():
    library = find_library(
        "".join(("gsdll", str(ctypes.sizeof(ctypes.c_voidp) * 8), ".dll"))
    )
    return library is not None


class GhostscriptBackend(object):
    def installed(self):
        if sys.platform in ["linux", "darwin"]:
            return installed_posix()
        elif sys.platform == "win32":
            return installed_windows()
        else:
            return installed_posix()

    def convert(self, pdf_path, png_path, resolution=300):
        if not self.installed():
            raise OSError(
                "Ghostscript is not installed. You can install it using the instructions"
                " here: https://camelot-py.readthedocs.io/en/master/user/install-deps.html"
            )

        import ghostscript

        gs_command = [
            "gs",
            "-q",
            "-sDEVICE=png16m",
            "-o",
            png_path,
            f"-r{resolution}",
            pdf_path,
        ]
        ghostscript.Ghostscript(*gs_command)
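A minimal sketch of calling this backend directly. The module path `camelot.backends.ghostscript_backend` is an assumption (the file header above is truncated), and it requires Ghostscript plus the `ghostscript` Python bindings; file names are placeholders:
```python
# assumed import path; the truncated file header does not show the package name
from camelot.backends.ghostscript_backend import GhostscriptBackend

backend = GhostscriptBackend()
if backend.installed():
    # rasterize the PDF to a PNG at 300 dpi using Ghostscript's png16m device
    backend.convert("foo.pdf", "foo.png", resolution=300)
else:
    print("Ghostscript shared library not found")
```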


@@ -1,40 +0,0 @@
# -*- coding: utf-8 -*-

from .poppler_backend import PopplerBackend
from .ghostscript_backend import GhostscriptBackend

BACKENDS = {"poppler": PopplerBackend, "ghostscript": GhostscriptBackend}


class ImageConversionBackend(object):
    def __init__(self, backend="poppler", use_fallback=True):
        if backend not in BACKENDS.keys():
            raise ValueError(f"Image conversion backend '{backend}' not supported")

        self.backend = backend
        self.use_fallback = use_fallback
        self.fallbacks = list(filter(lambda x: x != backend, BACKENDS.keys()))

    def convert(self, pdf_path, png_path):
        try:
            converter = BACKENDS[self.backend]()
            converter.convert(pdf_path, png_path)
        except Exception as e:
            import sys

            if self.use_fallback:
                for fallback in self.fallbacks:
                    try:
                        converter = BACKENDS[fallback]()
                        converter.convert(pdf_path, png_path)
                    except Exception as e:
                        raise type(e)(
                            str(e) + f" with image conversion backend '{fallback}'"
                        ).with_traceback(sys.exc_info()[2])
                        continue
                    else:
                        break
            else:
                raise type(e)(
                    str(e) + f" with image conversion backend '{self.backend}'"
                ).with_traceback(sys.exc_info()[2])
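The class above tries the preferred backend first and, when `use_fallback` is true, retries with the other registered backend before re-raising. A short usage sketch (import path assumed, as above; file names are placeholders):
```python
# assumed import path for the module shown above
from camelot.backends.image_conversion import ImageConversionBackend

# prefer poppler's pdftopng, fall back to Ghostscript if it raises
converter = ImageConversionBackend(backend="poppler", use_fallback=True)
converter.convert("foo.pdf", "foo.png")
```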


@@ -1,22 +0,0 @@
# -*- coding: utf-8 -*-

import shutil
import subprocess


class PopplerBackend(object):
    def convert(self, pdf_path, png_path):
        pdftopng_executable = shutil.which("pdftopng")
        if pdftopng_executable is None:
            raise OSError(
                "pdftopng is not installed. You can install it using the 'pip install pdftopng' command."
            )

        pdftopng_command = [pdftopng_executable, pdf_path, png_path]
        try:
            subprocess.check_output(
                " ".join(pdftopng_command), stderr=subprocess.STDOUT, shell=True
            )
        except subprocess.CalledProcessError as e:
            raise ValueError(e.output)


@@ -3,7 +3,6 @@
import logging
import click
try:
    import matplotlib.pyplot as plt
except ImportError:
@@ -14,7 +13,7 @@ else:
from . import __version__, read_pdf, plot
-logger = logging.getLogger("camelot")
+logger = logging.getLogger('camelot')
logger.setLevel(logging.INFO)
@ -29,49 +28,25 @@ class Config(object):
pass_config = click.make_pass_decorator(Config) pass_config = click.make_pass_decorator(Config)
@click.group(name="camelot") @click.group()
@click.version_option(version=__version__) @click.version_option(version=__version__)
@click.option("-q", "--quiet", is_flag=False, help="Suppress logs and warnings.") @click.option('-q', '--quiet', is_flag=False, help='Suppress logs and warnings.')
@click.option( @click.option('-p', '--pages', default='1', help='Comma-separated page numbers.'
"-p", ' Example: 1,3,4 or 1,4-end.')
"--pages", @click.option('-pw', '--password', help='Password for decryption.')
default="1", @click.option('-o', '--output', help='Output file path.')
help="Comma-separated page numbers." " Example: 1,3,4 or 1,4-end or all.", @click.option('-f', '--format',
) type=click.Choice(['csv', 'json', 'excel', 'html']),
@click.option("-pw", "--password", help="Password for decryption.") help='Output file format.')
@click.option("-o", "--output", help="Output file path.") @click.option('-z', '--zip', is_flag=True, help='Create ZIP archive.')
@click.option( @click.option('-split', '--split_text', is_flag=True,
"-f", help='Split text that spans across multiple cells.')
"--format", @click.option('-flag', '--flag_size', is_flag=True, help='Flag text based on'
type=click.Choice(["csv", "excel", "html", "json", "markdown", "sqlite"]), ' font size. Useful to detect super/subscripts.')
help="Output file format.", @click.option('-strip', '--strip_text', help='Characters that should be stripped from a string before'
) ' assigning it to a cell.')
@click.option("-z", "--zip", is_flag=True, help="Create ZIP archive.") @click.option('-M', '--margins', nargs=3, default=(1.0, 0.5, 0.1),
@click.option( help='PDFMiner char_margin, line_margin and word_margin.')
"-split",
"--split_text",
is_flag=True,
help="Split text that spans across multiple cells.",
)
@click.option(
"-flag",
"--flag_size",
is_flag=True,
help="Flag text based on" " font size. Useful to detect super/subscripts.",
)
@click.option(
"-strip",
"--strip_text",
help="Characters that should be stripped from a string before"
" assigning it to a cell.",
)
@click.option(
"-M",
"--margins",
nargs=3,
default=(1.0, 0.5, 0.1),
help="PDFMiner char_margin, line_margin and word_margin.",
)
@click.pass_context @click.pass_context
def cli(ctx, *args, **kwargs): def cli(ctx, *args, **kwargs):
"""Camelot: PDF Table Extraction for Humans""" """Camelot: PDF Table Extraction for Humans"""
@ -80,131 +55,74 @@ def cli(ctx, *args, **kwargs):
ctx.obj.set_config(key, value) ctx.obj.set_config(key, value)
@cli.command("lattice") @cli.command('lattice')
@click.option( @click.option('-T', '--table_areas', default=[], multiple=True,
"-R", help='Table areas to process. Example: x1,y1,x2,y2'
"--table_regions", ' where x1, y1 -> left-top and x2, y2 -> right-bottom.')
default=[], @click.option('-back', '--process_background', is_flag=True,
multiple=True, help='Process background lines.')
help="Page regions to analyze. Example: x1,y1,x2,y2" @click.option('-scale', '--line_size_scaling', default=15,
" where x1, y1 -> left-top and x2, y2 -> right-bottom.", help='Line size scaling factor. The larger the value,'
) ' the smaller the detected lines.')
@click.option( @click.option('-copy', '--copy_text', default=[], type=click.Choice(['h', 'v']),
"-T", multiple=True, help='Direction in which text in a spanning cell'
"--table_areas", ' will be copied over.')
default=[], @click.option('-shift', '--shift_text', default=['l', 't'],
multiple=True, type=click.Choice(['', 'l', 'r', 't', 'b']), multiple=True,
help="Table areas to process. Example: x1,y1,x2,y2" help='Direction in which text in a spanning cell will flow.')
" where x1, y1 -> left-top and x2, y2 -> right-bottom.", @click.option('-l', '--line_tol', default=2,
) help='Tolerance parameter used to merge close vertical'
@click.option( ' and horizontal lines.')
"-back", "--process_background", is_flag=True, help="Process background lines." @click.option('-j', '--joint_tol', default=2,
) help='Tolerance parameter used to decide whether'
@click.option( ' the detected lines and points lie close to each other.')
"-scale", @click.option('-block', '--threshold_blocksize', default=15,
"--line_scale", help='For adaptive thresholding, size of a pixel'
default=15, ' neighborhood that is used to calculate a threshold value for'
help="Line size scaling factor. The larger the value," ' the pixel. Example: 3, 5, 7, and so on.')
" the smaller the detected lines.", @click.option('-const', '--threshold_constant', default=-2,
) help='For adaptive thresholding, constant subtracted'
@click.option( ' from the mean or weighted mean. Normally, it is positive but'
"-copy", ' may be zero or negative as well.')
"--copy_text", @click.option('-I', '--iterations', default=0,
default=[], help='Number of times for erosion/dilation will be applied.')
type=click.Choice(["h", "v"]), @click.option('-res', '--resolution', default=300,
multiple=True, help='Resolution used for PDF to PNG conversion.')
help="Direction in which text in a spanning cell" " will be copied over.", @click.option('-plot', '--plot_type',
) type=click.Choice(['text', 'grid', 'contour', 'joint', 'line']),
@click.option( help='Plot elements found on PDF page for visual debugging.')
"-shift", @click.argument('filepath', type=click.Path(exists=True))
"--shift_text",
default=["l", "t"],
type=click.Choice(["", "l", "r", "t", "b"]),
multiple=True,
help="Direction in which text in a spanning cell will flow.",
)
@click.option(
"-l",
"--line_tol",
default=2,
help="Tolerance parameter used to merge close vertical" " and horizontal lines.",
)
@click.option(
"-j",
"--joint_tol",
default=2,
help="Tolerance parameter used to decide whether"
" the detected lines and points lie close to each other.",
)
@click.option(
"-block",
"--threshold_blocksize",
default=15,
help="For adaptive thresholding, size of a pixel"
" neighborhood that is used to calculate a threshold value for"
" the pixel. Example: 3, 5, 7, and so on.",
)
@click.option(
"-const",
"--threshold_constant",
default=-2,
help="For adaptive thresholding, constant subtracted"
" from the mean or weighted mean. Normally, it is positive but"
" may be zero or negative as well.",
)
@click.option(
"-I",
"--iterations",
default=0,
help="Number of times for erosion/dilation will be applied.",
)
@click.option(
"-res",
"--resolution",
default=300,
help="Resolution used for PDF to PNG conversion.",
)
@click.option(
"-plot",
"--plot_type",
type=click.Choice(["text", "grid", "contour", "joint", "line"]),
help="Plot elements found on PDF page for visual debugging.",
)
@click.argument("filepath", type=click.Path(exists=True))
@pass_config @pass_config
def lattice(c, *args, **kwargs): def lattice(c, *args, **kwargs):
"""Use lines between text to parse the table.""" """Use lines between text to parse the table."""
conf = c.config conf = c.config
pages = conf.pop("pages") pages = conf.pop('pages')
output = conf.pop("output") output = conf.pop('output')
f = conf.pop("format") f = conf.pop('format')
compress = conf.pop("zip") compress = conf.pop('zip')
quiet = conf.pop("quiet") quiet = conf.pop('quiet')
plot_type = kwargs.pop("plot_type") plot_type = kwargs.pop('plot_type')
filepath = kwargs.pop("filepath") filepath = kwargs.pop('filepath')
kwargs.update(conf) kwargs.update(conf)
table_regions = list(kwargs["table_regions"]) table_areas = list(kwargs['table_areas'])
kwargs["table_regions"] = None if not table_regions else table_regions kwargs['table_areas'] = None if not table_areas else table_areas
table_areas = list(kwargs["table_areas"]) copy_text = list(kwargs['copy_text'])
kwargs["table_areas"] = None if not table_areas else table_areas kwargs['copy_text'] = None if not copy_text else copy_text
copy_text = list(kwargs["copy_text"]) kwargs['shift_text'] = list(kwargs['shift_text'])
kwargs["copy_text"] = None if not copy_text else copy_text
kwargs["shift_text"] = list(kwargs["shift_text"])
if plot_type is not None: if plot_type is not None:
if not _HAS_MPL: if not _HAS_MPL:
raise ImportError("matplotlib is required for plotting.") raise ImportError('matplotlib is required for plotting.')
else: else:
if output is None: if output is None:
raise click.UsageError("Please specify output file path using --output") raise click.UsageError('Please specify output file path using --output')
if f is None: if f is None:
raise click.UsageError("Please specify output file format using --format") raise click.UsageError('Please specify output file format using --format')
tables = read_pdf( tables = read_pdf(filepath, pages=pages, flavor='lattice',
filepath, pages=pages, flavor="lattice", suppress_stdout=quiet, **kwargs suppress_stdout=quiet, **kwargs)
) click.echo('Found {} tables'.format(tables.n))
click.echo(f"Found {tables.n} tables")
if plot_type is not None: if plot_type is not None:
for table in tables: for table in tables:
plot(table, kind=plot_type) plot(table, kind=plot_type)
@ -213,89 +131,52 @@ def lattice(c, *args, **kwargs):
tables.export(output, f=f, compress=compress) tables.export(output, f=f, compress=compress)
@cli.command("stream") @cli.command('stream')
@click.option( @click.option('-T', '--table_areas', default=[], multiple=True,
"-R", help='Table areas to process. Example: x1,y1,x2,y2'
"--table_regions", ' where x1, y1 -> left-top and x2, y2 -> right-bottom.')
default=[], @click.option('-C', '--columns', default=[], multiple=True,
multiple=True, help='X coordinates of column separators.')
help="Page regions to analyze. Example: x1,y1,x2,y2" @click.option('-e', '--edge_tol', default=50, help='Tolerance parameter'
" where x1, y1 -> left-top and x2, y2 -> right-bottom.", ' for extending textedges vertically.')
) @click.option('-r', '--row_tol', default=2, help='Tolerance parameter'
@click.option( ' used to combine text vertically, to generate rows.')
"-T", @click.option('-c', '--column_tol', default=0, help='Tolerance parameter'
"--table_areas", ' used to combine text horizontally, to generate columns.')
default=[], @click.option('-plot', '--plot_type',
multiple=True, type=click.Choice(['text', 'grid', 'contour', 'textedge']),
help="Table areas to process. Example: x1,y1,x2,y2" help='Plot elements found on PDF page for visual debugging.')
" where x1, y1 -> left-top and x2, y2 -> right-bottom.", @click.argument('filepath', type=click.Path(exists=True))
)
@click.option(
"-C",
"--columns",
default=[],
multiple=True,
help="X coordinates of column separators.",
)
@click.option(
"-e",
"--edge_tol",
default=50,
help="Tolerance parameter" " for extending textedges vertically.",
)
@click.option(
"-r",
"--row_tol",
default=2,
help="Tolerance parameter" " used to combine text vertically, to generate rows.",
)
@click.option(
"-c",
"--column_tol",
default=0,
help="Tolerance parameter"
" used to combine text horizontally, to generate columns.",
)
@click.option(
"-plot",
"--plot_type",
type=click.Choice(["text", "grid", "contour", "textedge"]),
help="Plot elements found on PDF page for visual debugging.",
)
@click.argument("filepath", type=click.Path(exists=True))
@pass_config @pass_config
def stream(c, *args, **kwargs): def stream(c, *args, **kwargs):
"""Use spaces between text to parse the table.""" """Use spaces between text to parse the table."""
conf = c.config conf = c.config
pages = conf.pop("pages") pages = conf.pop('pages')
output = conf.pop("output") output = conf.pop('output')
f = conf.pop("format") f = conf.pop('format')
compress = conf.pop("zip") compress = conf.pop('zip')
quiet = conf.pop("quiet") quiet = conf.pop('quiet')
plot_type = kwargs.pop("plot_type") plot_type = kwargs.pop('plot_type')
filepath = kwargs.pop("filepath") filepath = kwargs.pop('filepath')
kwargs.update(conf) kwargs.update(conf)
table_regions = list(kwargs["table_regions"]) table_areas = list(kwargs['table_areas'])
kwargs["table_regions"] = None if not table_regions else table_regions kwargs['table_areas'] = None if not table_areas else table_areas
table_areas = list(kwargs["table_areas"]) columns = list(kwargs['columns'])
kwargs["table_areas"] = None if not table_areas else table_areas kwargs['columns'] = None if not columns else columns
columns = list(kwargs["columns"])
kwargs["columns"] = None if not columns else columns
if plot_type is not None: if plot_type is not None:
if not _HAS_MPL: if not _HAS_MPL:
raise ImportError("matplotlib is required for plotting.") raise ImportError('matplotlib is required for plotting.')
else: else:
if output is None: if output is None:
raise click.UsageError("Please specify output file path using --output") raise click.UsageError('Please specify output file path using --output')
if f is None: if f is None:
raise click.UsageError("Please specify output file format using --format") raise click.UsageError('Please specify output file format using --format')
tables = read_pdf( tables = read_pdf(filepath, pages=pages, flavor='stream',
filepath, pages=pages, flavor="stream", suppress_stdout=quiet, **kwargs suppress_stdout=quiet, **kwargs)
) click.echo('Found {} tables'.format(tables.n))
click.echo(f"Found {tables.n} tables")
if plot_type is not None: if plot_type is not None:
for table in tables: for table in tables:
plot(table, kind=plot_type) plot(table, kind=plot_type)
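Both subcommands in this file collect their options and hand them to `read_pdf` with the matching flavor. As a rough sketch, a `stream` run from the command line corresponds to a programmatic call like the following; all option values and file names are placeholders:
```python
import camelot

# roughly: camelot -p 1 -o foo.csv -f csv stream -C "72,95,209" foo.pdf
tables = camelot.read_pdf(
    "foo.pdf",
    pages="1",
    flavor="stream",
    columns=["72,95,209"],
    row_tol=2,
)
tables.export("foo.csv", f="csv")
```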


@@ -1,7 +1,6 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
import os import os
import sqlite3
import zipfile import zipfile
import tempfile import tempfile
from itertools import chain from itertools import chain
@ -42,8 +41,7 @@ class TextEdge(object):
TEXTEDGE_REQUIRED_ELEMENTS horizontal text rows. TEXTEDGE_REQUIRED_ELEMENTS horizontal text rows.
""" """
def __init__(self, x, y0, y1, align='left'):
def __init__(self, x, y0, y1, align="left"):
self.x = x self.x = x
self.y0 = y0 self.y0 = y0
self.y1 = y1 self.y1 = y1
@ -52,12 +50,8 @@ class TextEdge(object):
self.is_valid = False self.is_valid = False
def __repr__(self): def __repr__(self):
x = round(self.x, 2) return '<TextEdge x={} y0={} y1={} align={} valid={}>'.format(
y0 = round(self.y0, 2) round(self.x, 2), round(self.y0, 2), round(self.y1, 2), self.align, self.is_valid)
y1 = round(self.y1, 2)
return (
f"<TextEdge x={x} y0={y0} y1={y1} align={self.align} valid={self.is_valid}>"
)
def update_coords(self, x, y0, edge_tol=50): def update_coords(self, x, y0, edge_tol=50):
"""Updates the text edge's x and bottom y coordinates and sets """Updates the text edge's x and bottom y coordinates and sets
@ -78,10 +72,9 @@ class TextEdges(object):
the PDF page. The dict has three keys based on the alignments, the PDF page. The dict has three keys based on the alignments,
and each key's value is a list of camelot.core.TextEdge objects. and each key's value is a list of camelot.core.TextEdge objects.
""" """
def __init__(self, edge_tol=50): def __init__(self, edge_tol=50):
self.edge_tol = edge_tol self.edge_tol = edge_tol
self._textedges = {"left": [], "right": [], "middle": []} self._textedges = {'left': [], 'right': [], 'middle': []}
@staticmethod @staticmethod
def get_x_coord(textline, align): def get_x_coord(textline, align):
@ -91,7 +84,7 @@ class TextEdges(object):
x_left = textline.x0 x_left = textline.x0
x_right = textline.x1 x_right = textline.x1
x_middle = x_left + (x_right - x_left) / 2.0 x_middle = x_left + (x_right - x_left) / 2.0
x_coord = {"left": x_left, "middle": x_middle, "right": x_right} x_coord = {'left': x_left, 'middle': x_middle, 'right': x_right}
return x_coord[align] return x_coord[align]
def find(self, x_coord, align): def find(self, x_coord, align):
@ -104,7 +97,8 @@ class TextEdges(object):
return None return None
def add(self, textline, align): def add(self, textline, align):
"""Adds a new text edge to the current dict.""" """Adds a new text edge to the current dict.
"""
x = self.get_x_coord(textline, align) x = self.get_x_coord(textline, align)
y0 = textline.y0 y0 = textline.y0
y1 = textline.y1 y1 = textline.y1
@ -112,23 +106,23 @@ class TextEdges(object):
self._textedges[align].append(te) self._textedges[align].append(te)
def update(self, textline): def update(self, textline):
"""Updates an existing text edge in the current dict.""" """Updates an existing text edge in the current dict.
for align in ["left", "right", "middle"]: """
for align in ['left', 'right', 'middle']:
x_coord = self.get_x_coord(textline, align) x_coord = self.get_x_coord(textline, align)
idx = self.find(x_coord, align) idx = self.find(x_coord, align)
if idx is None: if idx is None:
self.add(textline, align) self.add(textline, align)
else: else:
self._textedges[align][idx].update_coords( self._textedges[align][idx].update_coords(
x_coord, textline.y0, edge_tol=self.edge_tol x_coord, textline.y0, edge_tol=self.edge_tol)
)
def generate(self, textlines): def generate(self, textlines):
"""Generates the text edges dict based on horizontal text """Generates the text edges dict based on horizontal text
rows. rows.
""" """
for tl in textlines: for tl in textlines:
if len(tl.get_text().strip()) > 1: # TODO: hacky if len(tl.get_text().strip()) > 1: # TODO: hacky
self.update(tl) self.update(tl)
def get_relevant(self): def get_relevant(self):
@ -137,15 +131,9 @@ class TextEdges(object):
the most. the most.
""" """
intersections_sum = { intersections_sum = {
"left": sum( 'left': sum(te.intersections for te in self._textedges['left'] if te.is_valid),
te.intersections for te in self._textedges["left"] if te.is_valid 'right': sum(te.intersections for te in self._textedges['right'] if te.is_valid),
), 'middle': sum(te.intersections for te in self._textedges['middle'] if te.is_valid)
"right": sum(
te.intersections for te in self._textedges["right"] if te.is_valid
),
"middle": sum(
te.intersections for te in self._textedges["middle"] if te.is_valid
),
} }
# TODO: naive # TODO: naive
@ -158,7 +146,6 @@ class TextEdges(object):
"""Returns a dict of interesting table areas on the PDF page """Returns a dict of interesting table areas on the PDF page
calculated using relevant text edges. calculated using relevant text edges.
""" """
def pad(area, average_row_height): def pad(area, average_row_height):
x0 = area[0] - TABLE_AREA_PADDING x0 = area[0] - TABLE_AREA_PADDING
y0 = area[1] - TABLE_AREA_PADDING y0 = area[1] - TABLE_AREA_PADDING
@ -187,11 +174,7 @@ class TextEdges(object):
else: else:
table_areas.pop(found) table_areas.pop(found)
updated_area = ( updated_area = (
found[0], found[0], min(te.y0, found[1]), max(found[2], te.x), max(found[3], te.y1))
min(te.y0, found[1]),
max(found[2], te.x),
max(found[3], te.y1),
)
table_areas[updated_area] = None table_areas[updated_area] = None
# extend table areas based on textlines that overlap # extend table areas based on textlines that overlap
@ -212,11 +195,7 @@ class TextEdges(object):
if found is not None: if found is not None:
table_areas.pop(found) table_areas.pop(found)
updated_area = ( updated_area = (
min(tl.x0, found[0]), min(tl.x0, found[0]), min(tl.y0, found[1]), max(found[2], tl.x1), max(found[3], tl.y1))
min(tl.y0, found[1]),
max(found[2], tl.x1),
max(found[3], tl.y1),
)
table_areas[updated_area] = None table_areas[updated_area] = None
average_textline_height = sum_textline_height / float(len(textlines)) average_textline_height = sum_textline_height / float(len(textlines))
@ -285,14 +264,11 @@ class Cell(object):
self.bottom = False self.bottom = False
self.hspan = False self.hspan = False
self.vspan = False self.vspan = False
self._text = "" self._text = ''
def __repr__(self): def __repr__(self):
x1 = round(self.x1) return '<Cell x1={} y1={} x2={} y2={}>'.format(
y1 = round(self.y1) round(self.x1, 2), round(self.y1, 2), round(self.x2, 2), round(self.y2, 2))
x2 = round(self.x2)
y2 = round(self.y2)
return f"<Cell x1={x1} y1={y1} x2={x2} y2={y2}>"
@property @property
def text(self): def text(self):
@ -300,11 +276,12 @@ class Cell(object):
@text.setter @text.setter
def text(self, t): def text(self, t):
self._text = "".join([self._text, t]) self._text = ''.join([self._text, t])
@property @property
def bound(self): def bound(self):
"""The number of sides on which the cell is bounded.""" """The number of sides on which the cell is bounded.
"""
return self.top + self.bottom + self.left + self.right return self.top + self.bottom + self.left + self.right
@ -336,11 +313,11 @@ class Table(object):
PDF page number. PDF page number.
""" """
def __init__(self, cols, rows): def __init__(self, cols, rows):
self.cols = cols self.cols = cols
self.rows = rows self.rows = rows
self.cells = [[Cell(c[0], r[1], c[1], r[0]) for c in cols] for r in rows] self.cells = [[Cell(c[0], r[1], c[1], r[0])
for c in cols] for r in rows]
self.df = None self.df = None
self.shape = (0, 0) self.shape = (0, 0)
self.accuracy = 0 self.accuracy = 0
@ -349,18 +326,12 @@ class Table(object):
self.page = None self.page = None
def __repr__(self): def __repr__(self):
return f"<{self.__class__.__name__} shape={self.shape}>" return '<{} shape={}>'.format(self.__class__.__name__, self.shape)
def __lt__(self, other):
if self.page == other.page:
if self.order < other.order:
return True
if self.page < other.page:
return True
@property @property
def data(self): def data(self):
"""Returns two-dimensional list of strings in table.""" """Returns two-dimensional list of strings in table.
"""
d = [] d = []
for row in self.cells: for row in self.cells:
d.append([cell.text.strip() for cell in row]) d.append([cell.text.strip() for cell in row])
@ -373,15 +344,16 @@ class Table(object):
""" """
# pretty? # pretty?
report = { report = {
"accuracy": round(self.accuracy, 2), 'accuracy': round(self.accuracy, 2),
"whitespace": round(self.whitespace, 2), 'whitespace': round(self.whitespace, 2),
"order": self.order, 'order': self.order,
"page": self.page, 'page': self.page
} }
return report return report
def set_all_edges(self): def set_all_edges(self):
"""Sets all table edges to True.""" """Sets all table edges to True.
"""
for row in self.cells: for row in self.cells:
for cell in row: for cell in row:
cell.left = cell.right = cell.top = cell.bottom = True cell.left = cell.right = cell.top = cell.bottom = True
@ -403,21 +375,12 @@ class Table(object):
for v in vertical: for v in vertical:
# find closest x coord # find closest x coord
# iterate over y coords and find closest start and end points # iterate over y coords and find closest start and end points
i = [ i = [i for i, t in enumerate(self.cols)
i if np.isclose(v[0], t[0], atol=joint_tol)]
for i, t in enumerate(self.cols) j = [j for j, t in enumerate(self.rows)
if np.isclose(v[0], t[0], atol=joint_tol) if np.isclose(v[3], t[0], atol=joint_tol)]
] k = [k for k, t in enumerate(self.rows)
j = [ if np.isclose(v[1], t[0], atol=joint_tol)]
j
for j, t in enumerate(self.rows)
if np.isclose(v[3], t[0], atol=joint_tol)
]
k = [
k
for k, t in enumerate(self.rows)
if np.isclose(v[1], t[0], atol=joint_tol)
]
if not j: if not j:
continue continue
J = j[0] J = j[0]
@ -463,21 +426,12 @@ class Table(object):
for h in horizontal: for h in horizontal:
# find closest y coord # find closest y coord
# iterate over x coords and find closest start and end points # iterate over x coords and find closest start and end points
i = [ i = [i for i, t in enumerate(self.rows)
i if np.isclose(h[1], t[0], atol=joint_tol)]
for i, t in enumerate(self.rows) j = [j for j, t in enumerate(self.cols)
if np.isclose(h[1], t[0], atol=joint_tol) if np.isclose(h[0], t[0], atol=joint_tol)]
] k = [k for k, t in enumerate(self.cols)
j = [ if np.isclose(h[2], t[0], atol=joint_tol)]
j
for j, t in enumerate(self.cols)
if np.isclose(h[0], t[0], atol=joint_tol)
]
k = [
k
for k, t in enumerate(self.cols)
if np.isclose(h[2], t[0], atol=joint_tol)
]
if not j: if not j:
continue continue
J = j[0] J = j[0]
@ -523,7 +477,8 @@ class Table(object):
return self return self
def set_border(self): def set_border(self):
"""Sets table border edges to True.""" """Sets table border edges to True.
"""
for r in range(len(self.rows)): for r in range(len(self.rows)):
self.cells[r][0].left = True self.cells[r][0].left = True
self.cells[r][len(self.cols) - 1].right = True self.cells[r][len(self.cols) - 1].right = True
@ -574,7 +529,12 @@ class Table(object):
Output filepath. Output filepath.
""" """
kw = {"encoding": "utf-8", "index": False, "header": False, "quoting": 1} kw = {
'encoding': 'utf-8',
'index': False,
'header': False,
'quoting': 1
}
kw.update(kwargs) kw.update(kwargs)
self.df.to_csv(path, **kw) self.df.to_csv(path, **kw)
@ -589,10 +549,12 @@ class Table(object):
Output filepath. Output filepath.
""" """
kw = {"orient": "records"} kw = {
'orient': 'records'
}
kw.update(kwargs) kw.update(kwargs)
json_string = self.df.to_json(**kw) json_string = self.df.to_json(**kw)
with open(path, "w") as f: with open(path, 'w') as f:
f.write(json_string) f.write(json_string)
def to_excel(self, path, **kwargs): def to_excel(self, path, **kwargs):
@ -607,8 +569,8 @@ class Table(object):
""" """
kw = { kw = {
"sheet_name": f"page-{self.page}-table-{self.order}", 'sheet_name': 'page-{}-table-{}'.format(self.page, self.order),
"encoding": "utf-8", 'encoding': 'utf-8'
} }
kw.update(kwargs) kw.update(kwargs)
writer = pd.ExcelWriter(path) writer = pd.ExcelWriter(path)
@ -627,43 +589,9 @@ class Table(object):
""" """
html_string = self.df.to_html(**kwargs) html_string = self.df.to_html(**kwargs)
with open(path, "w", encoding="utf-8") as f: with open(path, 'w') as f:
f.write(html_string) f.write(html_string)
def to_markdown(self, path, **kwargs):
"""Writes Table to a Markdown file.
For kwargs, check :meth:`pandas.DataFrame.to_markdown`.
Parameters
----------
path : str
Output filepath.
"""
md_string = self.df.to_markdown(**kwargs)
with open(path, "w", encoding="utf-8") as f:
f.write(md_string)
def to_sqlite(self, path, **kwargs):
"""Writes Table to sqlite database.
For kwargs, check :meth:`pandas.DataFrame.to_sql`.
Parameters
----------
path : str
Output filepath.
"""
kw = {"if_exists": "replace", "index": False}
kw.update(kwargs)
conn = sqlite3.connect(path)
table_name = f"page-{self.page}-table-{self.order}"
self.df.to_sql(table_name, conn, **kw)
conn.commit()
conn.close()
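A short sketch of round-tripping through the `to_sqlite` export shown above; per the code, the SQLite table is named `page-{page}-table-{order}`, so the first table on page 1 lands in `page-1-table-1` (file names are placeholders):
```python
import sqlite3

import camelot

tables = camelot.read_pdf("foo.pdf")
tables[0].to_sqlite("foo.db")

# read the exported rows back; hyphenated table names need quoting in SQL
conn = sqlite3.connect("foo.db")
rows = conn.execute('SELECT * FROM "page-1-table-1"').fetchall()
conn.close()
print(len(rows))
```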
class TableList(object): class TableList(object):
"""Defines a list of camelot.core.Table objects. Each table can """Defines a list of camelot.core.Table objects. Each table can
@ -675,12 +603,12 @@ class TableList(object):
Number of tables in the list. Number of tables in the list.
""" """
def __init__(self, tables): def __init__(self, tables):
self._tables = tables self._tables = tables
    def __repr__(self):
-        return f"<{self.__class__.__name__} n={self.n}>"
+        return '<{} n={}>'.format(
+            self.__class__.__name__, self.n)
def __len__(self): def __len__(self):
return len(self._tables) return len(self._tables)
@ -690,35 +618,37 @@ class TableList(object):
@staticmethod @staticmethod
def _format_func(table, f): def _format_func(table, f):
return getattr(table, f"to_{f}") return getattr(table, 'to_{}'.format(f))
@property @property
def n(self): def n(self):
return len(self) return len(self)
def _write_file(self, f=None, **kwargs): def _write_file(self, f=None, **kwargs):
dirname = kwargs.get("dirname") dirname = kwargs.get('dirname')
root = kwargs.get("root") root = kwargs.get('root')
ext = kwargs.get("ext") ext = kwargs.get('ext')
for table in self._tables: for table in self._tables:
filename = f"{root}-page-{table.page}-table-{table.order}{ext}" filename = os.path.join('{}-page-{}-table-{}{}'.format(
root, table.page, table.order, ext))
filepath = os.path.join(dirname, filename) filepath = os.path.join(dirname, filename)
to_format = self._format_func(table, f) to_format = self._format_func(table, f)
to_format(filepath) to_format(filepath)
def _compress_dir(self, **kwargs): def _compress_dir(self, **kwargs):
path = kwargs.get("path") path = kwargs.get('path')
dirname = kwargs.get("dirname") dirname = kwargs.get('dirname')
root = kwargs.get("root") root = kwargs.get('root')
ext = kwargs.get("ext") ext = kwargs.get('ext')
zipname = os.path.join(os.path.dirname(path), root) + ".zip" zipname = os.path.join(os.path.dirname(path), root) + '.zip'
with zipfile.ZipFile(zipname, "w", allowZip64=True) as z: with zipfile.ZipFile(zipname, 'w', allowZip64=True) as z:
for table in self._tables: for table in self._tables:
filename = f"{root}-page-{table.page}-table-{table.order}{ext}" filename = os.path.join('{}-page-{}-table-{}{}'.format(
root, table.page, table.order, ext))
filepath = os.path.join(dirname, filename) filepath = os.path.join(dirname, filename)
z.write(filepath, os.path.basename(filepath)) z.write(filepath, os.path.basename(filepath))
def export(self, path, f="csv", compress=False): def export(self, path, f='csv', compress=False):
"""Exports the list of tables to specified file format. """Exports the list of tables to specified file format.
Parameters Parameters
@ -726,7 +656,7 @@ class TableList(object):
path : str path : str
Output filepath. Output filepath.
f : str f : str
File format. Can be csv, excel, html, json, markdown or sqlite. File format. Can be csv, json, excel and html.
compress : bool compress : bool
Whether or not to add files to a ZIP archive. Whether or not to add files to a ZIP archive.
@ -737,28 +667,25 @@ class TableList(object):
if compress: if compress:
dirname = tempfile.mkdtemp() dirname = tempfile.mkdtemp()
kwargs = {"path": path, "dirname": dirname, "root": root, "ext": ext} kwargs = {
'path': path,
'dirname': dirname,
'root': root,
'ext': ext
}
if f in ["csv", "html", "json", "markdown"]: if f in ['csv', 'json', 'html']:
self._write_file(f=f, **kwargs) self._write_file(f=f, **kwargs)
if compress: if compress:
self._compress_dir(**kwargs) self._compress_dir(**kwargs)
elif f == "excel": elif f == 'excel':
filepath = os.path.join(dirname, basename) filepath = os.path.join(dirname, basename)
writer = pd.ExcelWriter(filepath) writer = pd.ExcelWriter(filepath)
for table in self._tables: for table in self._tables:
sheet_name = f"page-{table.page}-table-{table.order}" sheet_name = 'page-{}-table-{}'.format(table.page, table.order)
table.df.to_excel(writer, sheet_name=sheet_name, encoding="utf-8") table.df.to_excel(writer, sheet_name=sheet_name, encoding='utf-8')
writer.save() writer.save()
if compress: if compress:
zipname = os.path.join(os.path.dirname(path), root) + ".zip" zipname = os.path.join(os.path.dirname(path), root) + '.zip'
with zipfile.ZipFile(zipname, "w", allowZip64=True) as z: with zipfile.ZipFile(zipname, 'w', allowZip64=True) as z:
z.write(filepath, os.path.basename(filepath))
elif f == "sqlite":
filepath = os.path.join(dirname, basename)
for table in self._tables:
table.to_sqlite(filepath)
if compress:
zipname = os.path.join(os.path.dirname(path), root) + ".zip"
with zipfile.ZipFile(zipname, "w", allowZip64=True) as z:
z.write(filepath, os.path.basename(filepath)) z.write(filepath, os.path.basename(filepath))
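As a quick usage sketch of the export path shown above; the file names are placeholders, and read_pdf and the TableList come from the package itself.

```
import camelot

tables = camelot.read_pdf("foo.pdf", pages="1")  # "foo.pdf" is a placeholder

# one file per table, e.g. foo-page-1-table-1.csv next to the given path
tables.export("foo.csv", f="csv", compress=False)

# a single workbook with one sheet per table
tables.export("foo.xlsx", f="excel")
```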

@ -7,14 +7,8 @@ from PyPDF2 import PdfFileReader, PdfFileWriter
from .core import TableList from .core import TableList
from .parsers import Stream, Lattice from .parsers import Stream, Lattice
-from .utils import (
-    TemporaryDirectory,
-    get_page_layout,
-    get_text_objects,
-    get_rotation,
-    is_url,
-    download_url,
-)
+from .utils import (TemporaryDirectory, get_page_layout, get_text_objects,
+                    get_rotation, is_url, download_url)
class PDFHandler(object): class PDFHandler(object):
@ -28,28 +22,26 @@ class PDFHandler(object):
Filepath or URL of the PDF file. Filepath or URL of the PDF file.
pages : str, optional (default: '1') pages : str, optional (default: '1')
Comma-separated page numbers. Comma-separated page numbers.
Example: '1,3,4' or '1,4-end' or 'all'. Example: '1,3,4' or '1,4-end'.
password : str, optional (default: None) password : str, optional (default: None)
Password for decryption. Password for decryption.
""" """
-    def __init__(self, filepath, pages="1", password=None):
+    def __init__(self, filepath, pages='1', password=None):
        if is_url(filepath):
            filepath = download_url(filepath)
        self.filepath = filepath
-        # if not filepath.lower().endswith(".pdf"):
-        #     raise NotImplementedError("File format not supported")
+        if not filepath.lower().endswith('.pdf'):
+            raise NotImplementedError("File format not supported")
+        self.pages = self._get_pages(self.filepath, pages)
        if password is None:
-            self.password = ""
+            self.password = ''
        else:
            self.password = password
            if sys.version_info[0] < 3:
-                self.password = self.password.encode("ascii")
+                self.password = self.password.encode('ascii')
-        self.pages = self._get_pages(pages)
def _get_pages(self, pages): def _get_pages(self, filepath, pages):
"""Converts pages string to list of ints. """Converts pages string to list of ints.
Parameters Parameters
@ -58,7 +50,7 @@ class PDFHandler(object):
Filepath or URL of the PDF file. Filepath or URL of the PDF file.
pages : str, optional (default: '1') pages : str, optional (default: '1')
Comma-separated page numbers. Comma-separated page numbers.
Example: '1,3,4' or '1,4-end' or 'all'. Example: 1,3,4 or 1,4-end.
Returns Returns
------- -------
@ -67,31 +59,26 @@ class PDFHandler(object):
""" """
page_numbers = [] page_numbers = []
-        if pages == "1":
-            page_numbers.append({"start": 1, "end": 1})
-        else:
-            with open(self.filepath, "rb") as f:
-                infile = PdfFileReader(f, strict=False)
-                if infile.isEncrypted:
-                    infile.decrypt(self.password)
-                if pages == "all":
-                    page_numbers.append({"start": 1, "end": infile.getNumPages()})
-                else:
-                    for r in pages.split(","):
-                        if "-" in r:
-                            a, b = r.split("-")
-                            if b == "end":
-                                b = infile.getNumPages()
-                            page_numbers.append({"start": int(a), "end": int(b)})
-                        else:
-                            page_numbers.append({"start": int(r), "end": int(r)})
+        if pages == '1':
+            page_numbers.append({'start': 1, 'end': 1})
+        else:
+            infile = PdfFileReader(open(filepath, 'rb'), strict=False)
+            if infile.isEncrypted:
+                infile.decrypt(self.password)
+            if pages == 'all':
+                page_numbers.append({'start': 1, 'end': infile.getNumPages()})
+            else:
+                for r in pages.split(','):
+                    if '-' in r:
+                        a, b = r.split('-')
+                        if b == 'end':
+                            b = infile.getNumPages()
+                        page_numbers.append({'start': int(a), 'end': int(b)})
+                    else:
+                        page_numbers.append({'start': int(r), 'end': int(r)})
        P = []
        for p in page_numbers:
-            P.extend(range(p["start"], p["end"] + 1))
+            P.extend(range(p['start'], p['end'] + 1))
        return sorted(set(P))
def _save_page(self, filepath, page, temp): def _save_page(self, filepath, page, temp):
@ -107,44 +94,40 @@ class PDFHandler(object):
Tmp directory. Tmp directory.
""" """
with open(filepath, "rb") as fileobj: with open(filepath, 'rb') as fileobj:
infile = PdfFileReader(fileobj, strict=False) infile = PdfFileReader(fileobj, strict=False)
if infile.isEncrypted: if infile.isEncrypted:
infile.decrypt(self.password) infile.decrypt(self.password)
fpath = os.path.join(temp, f"page-{page}.pdf") fpath = os.path.join(temp, 'page-{0}.pdf'.format(page))
froot, fext = os.path.splitext(fpath) froot, fext = os.path.splitext(fpath)
p = infile.getPage(page - 1) p = infile.getPage(page - 1)
outfile = PdfFileWriter() outfile = PdfFileWriter()
outfile.addPage(p) outfile.addPage(p)
with open(fpath, "wb") as f: with open(fpath, 'wb') as f:
outfile.write(f) outfile.write(f)
layout, dim = get_page_layout(fpath) layout, dim = get_page_layout(fpath)
        # fix rotated PDF
-        chars = get_text_objects(layout, ltype="char")
-        horizontal_text = get_text_objects(layout, ltype="horizontal_text")
-        vertical_text = get_text_objects(layout, ltype="vertical_text")
-        rotation = get_rotation(chars, horizontal_text, vertical_text)
-        if rotation != "":
-            fpath_new = "".join([froot.replace("page", "p"), "_rotated", fext])
+        lttextlh = get_text_objects(layout, ltype="lh")
+        lttextlv = get_text_objects(layout, ltype="lv")
+        ltchar = get_text_objects(layout, ltype="char")
+        rotation = get_rotation(lttextlh, lttextlv, ltchar)
+        if rotation != '':
+            fpath_new = ''.join([froot.replace('page', 'p'), '_rotated', fext])
            os.rename(fpath, fpath_new)
-            instream = open(fpath_new, "rb")
-            infile = PdfFileReader(instream, strict=False)
+            infile = PdfFileReader(open(fpath_new, 'rb'), strict=False)
            if infile.isEncrypted:
                infile.decrypt(self.password)
            outfile = PdfFileWriter()
            p = infile.getPage(0)
-            if rotation == "anticlockwise":
+            if rotation == 'anticlockwise':
                p.rotateClockwise(90)
-            elif rotation == "clockwise":
+            elif rotation == 'clockwise':
                p.rotateCounterClockwise(90)
            outfile.addPage(p)
-            with open(fpath, "wb") as f:
+            with open(fpath, 'wb') as f:
                outfile.write(f)
-            instream.close()
-    def parse(
-        self, flavor="lattice", suppress_stdout=False, layout_kwargs={}, **kwargs
-    ):
+    def parse(self, flavor='lattice', suppress_stdout=False, layout_kwargs={}, **kwargs):
"""Extracts tables by calling parser.get_tables on all single """Extracts tables by calling parser.get_tables on all single
page PDFs. page PDFs.
@ -170,11 +153,11 @@ class PDFHandler(object):
with TemporaryDirectory() as tempdir: with TemporaryDirectory() as tempdir:
for p in self.pages: for p in self.pages:
self._save_page(self.filepath, p, tempdir) self._save_page(self.filepath, p, tempdir)
pages = [os.path.join(tempdir, f"page-{p}.pdf") for p in self.pages] pages = [os.path.join(tempdir, 'page-{0}.pdf'.format(p))
parser = Lattice(**kwargs) if flavor == "lattice" else Stream(**kwargs) for p in self.pages]
parser = Lattice(**kwargs) if flavor == 'lattice' else Stream(**kwargs)
for p in pages: for p in pages:
t = parser.extract_tables( t = parser.extract_tables(p, suppress_stdout=suppress_stdout,
p, suppress_stdout=suppress_stdout, layout_kwargs=layout_kwargs layout_kwargs=layout_kwargs)
)
tables.extend(t) tables.extend(t)
return TableList(sorted(tables)) return TableList(tables)
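The page-string grammar handled by _get_pages above ('1,3,4', '1,4-end', 'all') is easy to lose in the diff noise, so here is a small self-contained sketch of the same parsing idea; parse_pages and n_pages are illustrative names, not part of the library.

```
def parse_pages(pages, n_pages):
    """Sketch of the '1,3,4' / '1,4-end' / 'all' grammar handled above;
    n_pages stands in for infile.getNumPages()."""
    if pages == "all":
        return list(range(1, n_pages + 1))
    numbers = []
    for part in pages.split(","):
        if "-" in part:
            a, b = part.split("-")
            b = n_pages if b == "end" else int(b)
            numbers.extend(range(int(a), int(b) + 1))
        else:
            numbers.append(int(part))
    return sorted(set(numbers))


print(parse_pages("1,4-end", 10))  # -> [1, 4, 5, 6, 7, 8, 9, 10]
```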

@ -1,5 +1,7 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
from __future__ import division
import cv2 import cv2
import numpy as np import numpy as np
@ -37,23 +39,16 @@ def adaptive_threshold(imagename, process_background=False, blocksize=15, c=-2):
if process_background: if process_background:
        threshold = cv2.adaptiveThreshold(
-            gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, blocksize, c
-        )
+            gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
+            cv2.THRESH_BINARY, blocksize, c)
    else:
        threshold = cv2.adaptiveThreshold(
-            np.invert(gray),
-            255,
-            cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
-            cv2.THRESH_BINARY,
-            blocksize,
-            c,
-        )
+            np.invert(gray), 255,
+            cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, blocksize, c)
    return img, threshold
-def find_lines(
-    threshold, regions=None, direction="horizontal", line_scale=15, iterations=0
-):
+def find_lines(threshold, direction='horizontal', line_size_scaling=15, iterations=0):
"""Finds horizontal and vertical lines by applying morphological """Finds horizontal and vertical lines by applying morphological
transformations on an image. transformations on an image.
@ -61,13 +56,9 @@ def find_lines(
---------- ----------
threshold : object threshold : object
numpy.ndarray representing the thresholded image. numpy.ndarray representing the thresholded image.
regions : list, optional (default: None)
List of page regions that may contain tables of the form x1,y1,x2,y2
where (x1, y1) -> left-top and (x2, y2) -> right-bottom
in image coordinate space.
direction : string, optional (default: 'horizontal') direction : string, optional (default: 'horizontal')
Specifies whether to find vertical or horizontal lines. Specifies whether to find vertical or horizontal lines.
line_scale : int, optional (default: 15) line_size_scaling : int, optional (default: 15)
Factor by which the page dimensions will be divided to get Factor by which the page dimensions will be divided to get
smallest length of lines that should be detected. smallest length of lines that should be detected.
@ -91,21 +82,15 @@ def find_lines(
""" """
lines = [] lines = []
if direction == "vertical": if direction == 'vertical':
size = threshold.shape[0] // line_scale size = threshold.shape[0] // line_size_scaling
el = cv2.getStructuringElement(cv2.MORPH_RECT, (1, size)) el = cv2.getStructuringElement(cv2.MORPH_RECT, (1, size))
elif direction == "horizontal": elif direction == 'horizontal':
size = threshold.shape[1] // line_scale size = threshold.shape[1] // line_size_scaling
el = cv2.getStructuringElement(cv2.MORPH_RECT, (size, 1)) el = cv2.getStructuringElement(cv2.MORPH_RECT, (size, 1))
elif direction is None: elif direction is None:
raise ValueError("Specify direction as either 'vertical' or 'horizontal'") raise ValueError("Specify direction as either 'vertical' or"
" 'horizontal'")
if regions is not None:
region_mask = np.zeros(threshold.shape)
for region in regions:
x, y, w, h = region
region_mask[y : y + h, x : x + w] = 1
threshold = np.multiply(threshold, region_mask)
threshold = cv2.erode(threshold, el) threshold = cv2.erode(threshold, el)
threshold = cv2.dilate(threshold, el) threshold = cv2.dilate(threshold, el)
@ -113,27 +98,25 @@ def find_lines(
try: try:
_, contours, _ = cv2.findContours( _, contours, _ = cv2.findContours(
threshold.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE threshold, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
)
except ValueError: except ValueError:
# for opencv backward compatibility # for opencv backward compatibility
contours, _ = cv2.findContours( contours, _ = cv2.findContours(
threshold.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE threshold, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
)
for c in contours: for c in contours:
x, y, w, h = cv2.boundingRect(c) x, y, w, h = cv2.boundingRect(c)
x1, x2 = x, x + w x1, x2 = x, x + w
y1, y2 = y, y + h y1, y2 = y, y + h
if direction == "vertical": if direction == 'vertical':
lines.append(((x1 + x2) // 2, y2, (x1 + x2) // 2, y1)) lines.append(((x1 + x2) // 2, y2, (x1 + x2) // 2, y1))
elif direction == "horizontal": elif direction == 'horizontal':
lines.append((x1, (y1 + y2) // 2, x2, (y1 + y2) // 2)) lines.append((x1, (y1 + y2) // 2, x2, (y1 + y2) // 2))
return dmask, lines return dmask, lines
def find_contours(vertical, horizontal): def find_table_contours(vertical, horizontal):
"""Finds table boundaries using OpenCV's findContours. """Finds table boundaries using OpenCV's findContours.
Parameters Parameters
@ -155,14 +138,11 @@ def find_contours(vertical, horizontal):
try: try:
__, contours, __ = cv2.findContours( __, contours, __ = cv2.findContours(
mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
)
except ValueError: except ValueError:
# for opencv backward compatibility # for opencv backward compatibility
contours, __ = cv2.findContours( contours, __ = cv2.findContours(
mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
)
# sort in reverse based on contour area and use first 10 contours
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:10] contours = sorted(contours, key=cv2.contourArea, reverse=True)[:10]
cont = [] cont = []
@ -173,7 +153,7 @@ def find_contours(vertical, horizontal):
return cont return cont
def find_joints(contours, vertical, horizontal): def find_table_joints(contours, vertical, horizontal):
"""Finds joints/intersections present inside each table boundary. """Finds joints/intersections present inside each table boundary.
Parameters Parameters
@ -196,20 +176,18 @@ def find_joints(contours, vertical, horizontal):
and (x2, y2) -> rt in image coordinate space. and (x2, y2) -> rt in image coordinate space.
""" """
joints = np.multiply(vertical, horizontal) joints = np.bitwise_and(vertical, horizontal)
tables = {} tables = {}
for c in contours: for c in contours:
x, y, w, h = c x, y, w, h = c
roi = joints[y : y + h, x : x + w] roi = joints[y : y + h, x : x + w]
try: try:
__, jc, __ = cv2.findContours( __, jc, __ = cv2.findContours(
roi.astype(np.uint8), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE roi, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
)
except ValueError: except ValueError:
# for opencv backward compatibility # for opencv backward compatibility
jc, __ = cv2.findContours( jc, __ = cv2.findContours(
roi.astype(np.uint8), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE roi, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
)
if len(jc) <= 4: # remove contours with less than 4 joints if len(jc) <= 4: # remove contours with less than 4 joints
continue continue
joint_coords = [] joint_coords = []
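To make the morphology in find_lines and the joint detection above easier to follow, here is a hedged, self-contained sketch of the same idea on a toy image; the array, the divisor of 15 and the variable names are invented for illustration and this is not the library code.

```
import cv2
import numpy as np

# toy thresholded image (white grid lines on black), standing in for the
# output of adaptive_threshold() above
thresh = np.zeros((90, 90), dtype=np.uint8)
thresh[30, :] = 255   # one horizontal line
thresh[:, 45] = 255   # one vertical line

size = thresh.shape[1] // 15  # same scaling idea as line_size_scaling / line_scale
el = cv2.getStructuringElement(cv2.MORPH_RECT, (size, 1))
horizontal = cv2.dilate(cv2.erode(thresh, el), el)  # keeps only long horizontal runs

el = cv2.getStructuringElement(cv2.MORPH_RECT, (1, size))
vertical = cv2.dilate(cv2.erode(thresh, el), el)    # keeps only long vertical runs

joints = np.bitwise_and(vertical, horizontal)       # candidate cell corners
print(cv2.findNonZero(joints))                      # -> points around (45, 30)
```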

@ -1,20 +1,12 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
import warnings import warnings
from .handlers import PDFHandler from .handlers import PDFHandler
from .utils import validate_input, remove_extra from .utils import validate_input, remove_extra
-def read_pdf(
-    filepath,
-    pages="1",
-    password=None,
-    flavor="lattice",
-    suppress_stdout=False,
-    layout_kwargs={},
-    **kwargs
-):
+def read_pdf(filepath, pages='1', password=None, flavor='lattice',
+             suppress_stdout=False, layout_kwargs={}, **kwargs):
"""Read PDF and return extracted tables. """Read PDF and return extracted tables.
Note: kwargs annotated with ^ can only be used with flavor='stream' Note: kwargs annotated with ^ can only be used with flavor='stream'
@ -26,7 +18,7 @@ def read_pdf(
Filepath or URL of the PDF file. Filepath or URL of the PDF file.
pages : str, optional (default: '1') pages : str, optional (default: '1')
Comma-separated page numbers. Comma-separated page numbers.
Example: '1,3,4' or '1,4-end' or 'all'. Example: '1,3,4' or '1,4-end'.
password : str, optional (default: None) password : str, optional (default: None)
Password for decryption. Password for decryption.
flavor : str (default: 'lattice') flavor : str (default: 'lattice')
@ -59,7 +51,7 @@ def read_pdf(
to generate columns. to generate columns.
process_background* : bool, optional (default: False) process_background* : bool, optional (default: False)
Process background lines. Process background lines.
line_scale* : int, optional (default: 15) line_size_scaling* : int, optional (default: 15)
Line size scaling factor. The larger the value the smaller Line size scaling factor. The larger the value the smaller
the detected lines. Making it very large will lead to text the detected lines. Making it very large will lead to text
being detected as lines. being detected as lines.
@ -98,10 +90,9 @@ def read_pdf(
tables : camelot.core.TableList tables : camelot.core.TableList
""" """
if flavor not in ["lattice", "stream"]: if flavor not in ['lattice', 'stream']:
raise NotImplementedError( raise NotImplementedError("Unknown flavor specified."
"Unknown flavor specified." " Use either 'lattice' or 'stream'" " Use either 'lattice' or 'stream'")
)
with warnings.catch_warnings(): with warnings.catch_warnings():
if suppress_stdout: if suppress_stdout:
@ -110,10 +101,6 @@ def read_pdf(
validate_input(kwargs, flavor=flavor) validate_input(kwargs, flavor=flavor)
p = PDFHandler(filepath, pages=pages, password=password) p = PDFHandler(filepath, pages=pages, password=password)
kwargs = remove_extra(kwargs, flavor=flavor) kwargs = remove_extra(kwargs, flavor=flavor)
tables = p.parse( tables = p.parse(flavor=flavor, suppress_stdout=suppress_stdout,
flavor=flavor, layout_kwargs=layout_kwargs, **kwargs)
suppress_stdout=suppress_stdout,
layout_kwargs=layout_kwargs,
**kwargs
)
return tables return tables
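A minimal usage sketch of read_pdf as documented above; the PDF path is a placeholder, and the indexing assumes camelot's published TableList behaviour.

```
import camelot

# "example.pdf" is a placeholder path
tables = camelot.read_pdf("example.pdf", pages="1,4-end", flavor="lattice")
print(tables)      # <TableList n=...>
df = tables[0].df  # each Table wraps a pandas DataFrame
```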

@ -6,15 +6,14 @@ from ..utils import get_page_layout, get_text_objects
class BaseParser(object): class BaseParser(object):
"""Defines a base parser.""" """Defines a base parser.
"""
def _generate_layout(self, filename, layout_kwargs): def _generate_layout(self, filename, layout_kwargs):
self.filename = filename self.filename = filename
self.layout_kwargs = layout_kwargs self.layout_kwargs = layout_kwargs
self.layout, self.dimensions = get_page_layout(filename, **layout_kwargs) self.layout, self.dimensions = get_page_layout(
self.images = get_text_objects(self.layout, ltype="image") filename, **layout_kwargs)
self.horizontal_text = get_text_objects(self.layout, ltype="horizontal_text") self.horizontal_text = get_text_objects(self.layout, ltype="lh")
self.vertical_text = get_text_objects(self.layout, ltype="vertical_text") self.vertical_text = get_text_objects(self.layout, ltype="lv")
self.pdf_width, self.pdf_height = self.dimensions self.pdf_width, self.pdf_height = self.dimensions
self.rootname, __ = os.path.splitext(self.filename) self.rootname, __ = os.path.splitext(self.filename)
self.imagename = "".join([self.rootname, ".png"])

@ -1,37 +1,25 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
from __future__ import division
import os import os
import sys
import copy import copy
import locale
import logging import logging
import warnings import warnings
import subprocess
import numpy as np import numpy as np
import pandas as pd import pandas as pd
from .base import BaseParser from .base import BaseParser
from ..core import Table from ..core import Table
-from ..utils import (
-    scale_image,
-    scale_pdf,
-    segments_in_bbox,
-    text_in_bbox,
-    merge_close_lines,
-    get_table_index,
-    compute_accuracy,
-    compute_whitespace,
-)
-from ..image_processing import (
-    adaptive_threshold,
-    find_lines,
-    find_contours,
-    find_joints,
-)
-from ..backends.image_conversion import BACKENDS
+from ..utils import (scale_image, scale_pdf, segments_in_bbox, text_in_bbox,
+                     merge_close_lines, get_table_index, compute_accuracy,
+                     compute_whitespace)
+from ..image_processing import (adaptive_threshold, find_lines,
+                                find_table_contours, find_table_joints)
logger = logging.getLogger("camelot") logger = logging.getLogger('camelot')
class Lattice(BaseParser): class Lattice(BaseParser):
@ -40,17 +28,13 @@ class Lattice(BaseParser):
Parameters Parameters
---------- ----------
table_regions : list, optional (default: None)
List of page regions that may contain tables of the form x1,y1,x2,y2
where (x1, y1) -> left-top and (x2, y2) -> right-bottom
in PDF coordinate space.
table_areas : list, optional (default: None) table_areas : list, optional (default: None)
List of table area strings of the form x1,y1,x2,y2 List of table area strings of the form x1,y1,x2,y2
where (x1, y1) -> left-top and (x2, y2) -> right-bottom where (x1, y1) -> left-top and (x2, y2) -> right-bottom
in PDF coordinate space. in PDF coordinate space.
process_background : bool, optional (default: False) process_background : bool, optional (default: False)
Process background lines. Process background lines.
line_scale : int, optional (default: 15) line_size_scaling : int, optional (default: 15)
Line size scaling factor. The larger the value the smaller Line size scaling factor. The larger the value the smaller
the detected lines. Making it very large will lead to text the detected lines. Making it very large will lead to text
being detected as lines. being detected as lines.
@ -93,31 +77,14 @@ class Lattice(BaseParser):
Resolution used for PDF to PNG conversion. Resolution used for PDF to PNG conversion.
""" """
def __init__(self, table_areas=None, process_background=False,
def __init__( line_size_scaling=15, copy_text=None, shift_text=['l', 't'],
self, split_text=False, flag_size=False, strip_text='', line_tol=2,
table_regions=None, joint_tol=2, threshold_blocksize=15, threshold_constant=-2,
table_areas=None, iterations=0, resolution=300, **kwargs):
process_background=False,
line_scale=15,
copy_text=None,
shift_text=["l", "t"],
split_text=False,
flag_size=False,
strip_text="",
line_tol=2,
joint_tol=2,
threshold_blocksize=15,
threshold_constant=-2,
iterations=0,
resolution=300,
backend="ghostscript",
**kwargs,
):
self.table_regions = table_regions
self.table_areas = table_areas self.table_areas = table_areas
self.process_background = process_background self.process_background = process_background
self.line_scale = line_scale self.line_size_scaling = line_size_scaling
self.copy_text = copy_text self.copy_text = copy_text
self.shift_text = shift_text self.shift_text = shift_text
self.split_text = split_text self.split_text = split_text
@ -129,37 +96,6 @@ class Lattice(BaseParser):
self.threshold_constant = threshold_constant self.threshold_constant = threshold_constant
self.iterations = iterations self.iterations = iterations
self.resolution = resolution self.resolution = resolution
self.backend = Lattice._get_backend(backend)
@staticmethod
def _get_backend(backend):
def implements_convert():
methods = [
method for method in dir(backend) if method.startswith("__") is False
]
return "convert" in methods
if isinstance(backend, str):
if backend not in BACKENDS.keys():
raise NotImplementedError(
f"Unknown backend '{backend}' specified. Please use either 'poppler' or 'ghostscript'."
)
if backend == "ghostscript":
warnings.warn(
"'ghostscript' will be replaced by 'poppler' as the default image conversion"
" backend in v0.12.0. You can try out 'poppler' with backend='poppler'.",
DeprecationWarning,
)
return BACKENDS[backend]()
else:
if not implements_convert():
raise NotImplementedError(
f"'{backend}' must implement a 'convert' method"
)
return backend
@staticmethod @staticmethod
def _reduce_index(t, idx, shift_text): def _reduce_index(t, idx, shift_text):
@ -187,19 +123,19 @@ class Lattice(BaseParser):
indices = [] indices = []
for r_idx, c_idx, text in idx: for r_idx, c_idx, text in idx:
for d in shift_text: for d in shift_text:
if d == "l": if d == 'l':
if t.cells[r_idx][c_idx].hspan: if t.cells[r_idx][c_idx].hspan:
while not t.cells[r_idx][c_idx].left: while not t.cells[r_idx][c_idx].left:
c_idx -= 1 c_idx -= 1
if d == "r": if d == 'r':
if t.cells[r_idx][c_idx].hspan: if t.cells[r_idx][c_idx].hspan:
while not t.cells[r_idx][c_idx].right: while not t.cells[r_idx][c_idx].right:
c_idx += 1 c_idx += 1
if d == "t": if d == 't':
if t.cells[r_idx][c_idx].vspan: if t.cells[r_idx][c_idx].vspan:
while not t.cells[r_idx][c_idx].top: while not t.cells[r_idx][c_idx].top:
r_idx -= 1 r_idx -= 1
if d == "b": if d == 'b':
if t.cells[r_idx][c_idx].vspan: if t.cells[r_idx][c_idx].vspan:
while not t.cells[r_idx][c_idx].bottom: while not t.cells[r_idx][c_idx].bottom:
r_idx += 1 r_idx += 1
@ -228,37 +164,72 @@ class Lattice(BaseParser):
if f == "h": if f == "h":
for i in range(len(t.cells)): for i in range(len(t.cells)):
for j in range(len(t.cells[i])): for j in range(len(t.cells[i])):
if t.cells[i][j].text.strip() == "": if t.cells[i][j].text.strip() == '':
if t.cells[i][j].hspan and not t.cells[i][j].left: if t.cells[i][j].hspan and not t.cells[i][j].left:
t.cells[i][j].text = t.cells[i][j - 1].text t.cells[i][j].text = t.cells[i][j - 1].text
elif f == "v": elif f == "v":
for i in range(len(t.cells)): for i in range(len(t.cells)):
for j in range(len(t.cells[i])): for j in range(len(t.cells[i])):
if t.cells[i][j].text.strip() == "": if t.cells[i][j].text.strip() == '':
if t.cells[i][j].vspan and not t.cells[i][j].top: if t.cells[i][j].vspan and not t.cells[i][j].top:
t.cells[i][j].text = t.cells[i - 1][j].text t.cells[i][j].text = t.cells[i - 1][j].text
return t return t
-    def _generate_table_bbox(self):
-        def scale_areas(areas):
-            scaled_areas = []
-            for area in areas:
-                x1, y1, x2, y2 = area.split(",")
-                x1 = float(x1)
-                y1 = float(y1)
-                x2 = float(x2)
-                y2 = float(y2)
-                x1, y1, x2, y2 = scale_pdf((x1, y1, x2, y2), image_scalers)
-                scaled_areas.append((x1, y1, abs(x2 - x1), abs(y2 - y1)))
-            return scaled_areas
-
-        self.image, self.threshold = adaptive_threshold(
-            self.imagename,
-            process_background=self.process_background,
-            blocksize=self.threshold_blocksize,
-            c=self.threshold_constant,
-        )
+    def _generate_image(self):
+        # TODO: get rid of ghostscript #96
+        def get_executable():
+            import platform
+            from distutils.spawn import find_executable
+
+            class GhostscriptNotFound(Exception): pass
+
+            gs = None
+            system = platform.system().lower()
+            try:
+                if system == 'windows':
+                    if find_executable('gswin32c.exe'):
+                        gs = 'gswin32c.exe'
+                    elif find_executable('gswin64c.exe'):
+                        gs = 'gswin64c.exe'
+                    else:
+                        raise ValueError
+                else:
+                    if find_executable('gs'):
+                        gs = 'gs'
+                    elif find_executable('gsc'):
+                        gs = 'gsc'
+                    else:
+                        raise ValueError
+                if 'ghostscript' not in subprocess.check_output(
+                        [gs, '-version']).decode('utf-8').lower():
+                    raise ValueError
+            except ValueError:
+                raise GhostscriptNotFound(
+                    'Please make sure that Ghostscript is installed'
+                    ' and available on the PATH environment variable')
+            return gs
+
+        self.imagename = ''.join([self.rootname, '.png'])
+        gs_call = [
+            '-q',
+            '-sDEVICE=png16m',
+            '-o',
+            self.imagename,
+            '-r{}'.format(self.resolution),
+            self.filename
+        ]
+        gs = get_executable()
+        gs_call.insert(0, gs)
+        subprocess.call(
+            gs_call, stdout=open(os.devnull, 'w'),
+            stderr=subprocess.STDOUT)
+
+    def _generate_table_bbox(self):
+        self.image, self.threshold = adaptive_threshold(
+            self.imagename, process_background=self.process_background,
+            blocksize=self.threshold_blocksize, c=self.threshold_constant)
image_width = self.image.shape[1] image_width = self.image.shape[1]
image_height = self.image.shape[0] image_height = self.image.shape[0]
image_width_scaler = image_width / float(self.pdf_width) image_width_scaler = image_width / float(self.pdf_width)
@ -268,62 +239,43 @@ class Lattice(BaseParser):
image_scalers = (image_width_scaler, image_height_scaler, self.pdf_height) image_scalers = (image_width_scaler, image_height_scaler, self.pdf_height)
pdf_scalers = (pdf_width_scaler, pdf_height_scaler, image_height) pdf_scalers = (pdf_width_scaler, pdf_height_scaler, image_height)
-        if self.table_areas is None:
-            regions = None
-            if self.table_regions is not None:
-                regions = scale_areas(self.table_regions)
-            vertical_mask, vertical_segments = find_lines(
-                self.threshold,
-                regions=regions,
-                direction="vertical",
-                line_scale=self.line_scale,
-                iterations=self.iterations,
-            )
-            horizontal_mask, horizontal_segments = find_lines(
-                self.threshold,
-                regions=regions,
-                direction="horizontal",
-                line_scale=self.line_scale,
-                iterations=self.iterations,
-            )
-            contours = find_contours(vertical_mask, horizontal_mask)
-            table_bbox = find_joints(contours, vertical_mask, horizontal_mask)
-        else:
-            vertical_mask, vertical_segments = find_lines(
-                self.threshold,
-                direction="vertical",
-                line_scale=self.line_scale,
-                iterations=self.iterations,
-            )
-            horizontal_mask, horizontal_segments = find_lines(
-                self.threshold,
-                direction="horizontal",
-                line_scale=self.line_scale,
-                iterations=self.iterations,
-            )
-            areas = scale_areas(self.table_areas)
-            table_bbox = find_joints(areas, vertical_mask, horizontal_mask)
+        vertical_mask, vertical_segments = find_lines(
+            self.threshold, direction='vertical',
+            line_size_scaling=self.line_size_scaling, iterations=self.iterations)
+        horizontal_mask, horizontal_segments = find_lines(
+            self.threshold, direction='horizontal',
+            line_size_scaling=self.line_size_scaling, iterations=self.iterations)
+        if self.table_areas is not None:
+            areas = []
+            for area in self.table_areas:
+                x1, y1, x2, y2 = area.split(",")
+                x1 = float(x1)
+                y1 = float(y1)
+                x2 = float(x2)
+                y2 = float(y2)
+                x1, y1, x2, y2 = scale_pdf((x1, y1, x2, y2), image_scalers)
+                areas.append((x1, y1, abs(x2 - x1), abs(y2 - y1)))
+            table_bbox = find_table_joints(areas, vertical_mask, horizontal_mask)
+        else:
+            contours = find_table_contours(vertical_mask, horizontal_mask)
+            table_bbox = find_table_joints(contours, vertical_mask, horizontal_mask)
        self.table_bbox_unscaled = copy.deepcopy(table_bbox)
        self.table_bbox, self.vertical_segments, self.horizontal_segments = scale_image(
-            table_bbox, vertical_segments, horizontal_segments, pdf_scalers
-        )
+            table_bbox, vertical_segments, horizontal_segments, pdf_scalers)
def _generate_columns_and_rows(self, table_idx, tk): def _generate_columns_and_rows(self, table_idx, tk):
# select elements which lie within table_bbox # select elements which lie within table_bbox
t_bbox = {} t_bbox = {}
v_s, h_s = segments_in_bbox( v_s, h_s = segments_in_bbox(
tk, self.vertical_segments, self.horizontal_segments tk, self.vertical_segments, self.horizontal_segments)
) t_bbox['horizontal'] = text_in_bbox(tk, self.horizontal_text)
t_bbox["horizontal"] = text_in_bbox(tk, self.horizontal_text) t_bbox['vertical'] = text_in_bbox(tk, self.vertical_text)
t_bbox["vertical"] = text_in_bbox(tk, self.vertical_text)
t_bbox["horizontal"].sort(key=lambda x: (-x.y0, x.x0)) t_bbox['horizontal'].sort(key=lambda x: (-x.y0, x.x0))
t_bbox["vertical"].sort(key=lambda x: (x.x0, -x.y0)) t_bbox['vertical'].sort(key=lambda x: (x.x0, -x.y0))
self.t_bbox = t_bbox self.t_bbox = t_bbox
@ -332,19 +284,23 @@ class Lattice(BaseParser):
cols.extend([tk[0], tk[2]]) cols.extend([tk[0], tk[2]])
rows.extend([tk[1], tk[3]]) rows.extend([tk[1], tk[3]])
# sort horizontal and vertical segments # sort horizontal and vertical segments
cols = merge_close_lines(sorted(cols), line_tol=self.line_tol) cols = merge_close_lines(
rows = merge_close_lines(sorted(rows, reverse=True), line_tol=self.line_tol) sorted(cols), line_tol=self.line_tol)
rows = merge_close_lines(
sorted(rows, reverse=True), line_tol=self.line_tol)
# make grid using x and y coord of shortlisted rows and cols # make grid using x and y coord of shortlisted rows and cols
cols = [(cols[i], cols[i + 1]) for i in range(0, len(cols) - 1)] cols = [(cols[i], cols[i + 1])
rows = [(rows[i], rows[i + 1]) for i in range(0, len(rows) - 1)] for i in range(0, len(cols) - 1)]
rows = [(rows[i], rows[i + 1])
for i in range(0, len(rows) - 1)]
return cols, rows, v_s, h_s return cols, rows, v_s, h_s
def _generate_table(self, table_idx, cols, rows, **kwargs): def _generate_table(self, table_idx, cols, rows, **kwargs):
v_s = kwargs.get("v_s") v_s = kwargs.get('v_s')
h_s = kwargs.get("h_s") h_s = kwargs.get('h_s')
if v_s is None or h_s is None: if v_s is None or h_s is None:
raise ValueError("No segments found on {}".format(self.rootname)) raise ValueError('No segments found on {}'.format(self.rootname))
table = Table(cols, rows) table = Table(cols, rows)
# set table edges to True using ver+hor lines # set table edges to True using ver+hor lines
@ -357,21 +313,14 @@ class Lattice(BaseParser):
pos_errors = [] pos_errors = []
# TODO: have a single list in place of two directional ones? # TODO: have a single list in place of two directional ones?
# sorted on x-coordinate based on reading order i.e. LTR or RTL # sorted on x-coordinate based on reading order i.e. LTR or RTL
for direction in ["vertical", "horizontal"]: for direction in ['vertical', 'horizontal']:
for t in self.t_bbox[direction]: for t in self.t_bbox[direction]:
indices, error = get_table_index( indices, error = get_table_index(
table, table, t, direction, split_text=self.split_text,
t, flag_size=self.flag_size, strip_text=self.strip_text)
direction,
split_text=self.split_text,
flag_size=self.flag_size,
strip_text=self.strip_text,
)
if indices[:2] != (-1, -1): if indices[:2] != (-1, -1):
pos_errors.append(error) pos_errors.append(error)
indices = Lattice._reduce_index( indices = Lattice._reduce_index(table, indices, shift_text=self.shift_text)
table, indices, shift_text=self.shift_text
)
for r_idx, c_idx, text in indices: for r_idx, c_idx, text in indices:
table.cells[r_idx][c_idx].text = text table.cells[r_idx][c_idx].text = text
accuracy = compute_accuracy([[100, pos_errors]]) accuracy = compute_accuracy([[100, pos_errors]])
@ -384,11 +333,11 @@ class Lattice(BaseParser):
table.shape = table.df.shape table.shape = table.df.shape
whitespace = compute_whitespace(data) whitespace = compute_whitespace(data)
table.flavor = "lattice" table.flavor = 'lattice'
table.accuracy = accuracy table.accuracy = accuracy
table.whitespace = whitespace table.whitespace = whitespace
table.order = table_idx + 1 table.order = table_idx + 1
table.page = int(os.path.basename(self.rootname).replace("page-", "")) table.page = int(os.path.basename(self.rootname).replace('page-', ''))
# for plotting # for plotting
_text = [] _text = []
@ -404,29 +353,20 @@ class Lattice(BaseParser):
def extract_tables(self, filename, suppress_stdout=False, layout_kwargs={}): def extract_tables(self, filename, suppress_stdout=False, layout_kwargs={}):
self._generate_layout(filename, layout_kwargs) self._generate_layout(filename, layout_kwargs)
if not suppress_stdout: if not suppress_stdout:
logger.info("Processing {}".format(os.path.basename(self.rootname))) logger.info('Processing {}'.format(os.path.basename(self.rootname)))
if not self.horizontal_text: if not self.horizontal_text:
if self.images: warnings.warn("No tables found on {}".format(
warnings.warn( os.path.basename(self.rootname)))
"{} is image-based, camelot only works on"
" text-based pages.".format(os.path.basename(self.rootname))
)
else:
warnings.warn(
"No tables found on {}".format(os.path.basename(self.rootname))
)
return [] return []
self.backend.convert(self.filename, self.imagename) self._generate_image()
self._generate_table_bbox() self._generate_table_bbox()
_tables = [] _tables = []
# sort tables based on y-coord # sort tables based on y-coord
for table_idx, tk in enumerate( for table_idx, tk in enumerate(sorted(
sorted(self.table_bbox.keys(), key=lambda x: x[1], reverse=True) self.table_bbox.keys(), key=lambda x: x[1], reverse=True)):
):
cols, rows, v_s, h_s = self._generate_columns_and_rows(table_idx, tk) cols, rows, v_s, h_s = self._generate_columns_and_rows(table_idx, tk)
table = self._generate_table(table_idx, cols, rows, v_s=v_s, h_s=h_s) table = self._generate_table(table_idx, cols, rows, v_s=v_s, h_s=h_s)
table._bbox = tk table._bbox = tk
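One step in _generate_table_bbox above that is worth seeing in isolation is the coordinate scaling: user-supplied areas in PDF points are mapped into image pixels (and detected boxes mapped back) with width/height ratios plus a y-axis flip. Below is a sketch of that arithmetic with made-up dimensions; it is not the library's scale_pdf/scale_image code.

```
# made-up page and image dimensions for illustration
pdf_width, pdf_height = 612.0, 792.0     # PDF points (US letter)
image_width, image_height = 2550, 3300   # pixels at 300 dpi


def pdf_to_image(x, y):
    # PDF origin is bottom-left, image origin is top-left, so flip y
    return x * image_width / pdf_width, (pdf_height - y) * image_height / pdf_height


def image_to_pdf(px, py):
    return px * pdf_width / image_width, pdf_height - py * pdf_height / image_height


print(pdf_to_image(72, 72))      # one inch in from the left and up from the bottom
print(image_to_pdf(300, 3000))   # back to roughly (72.0, 72.0)
```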

@ -1,5 +1,6 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
from __future__ import division
import os import os
import logging import logging
import warnings import warnings
@ -9,10 +10,11 @@ import pandas as pd
from .base import BaseParser from .base import BaseParser
from ..core import TextEdges, Table from ..core import TextEdges, Table
from ..utils import text_in_bbox, get_table_index, compute_accuracy, compute_whitespace from ..utils import (text_in_bbox, get_table_index, compute_accuracy,
compute_whitespace)
logger = logging.getLogger("camelot") logger = logging.getLogger('camelot')
class Stream(BaseParser): class Stream(BaseParser):
@ -24,10 +26,6 @@ class Stream(BaseParser):
Parameters Parameters
---------- ----------
table_regions : list, optional (default: None)
List of page regions that may contain tables of the form x1,y1,x2,y2
where (x1, y1) -> left-top and (x2, y2) -> right-bottom
in PDF coordinate space.
table_areas : list, optional (default: None) table_areas : list, optional (default: None)
List of table area strings of the form x1,y1,x2,y2 List of table area strings of the form x1,y1,x2,y2
where (x1, y1) -> left-top and (x2, y2) -> right-bottom where (x1, y1) -> left-top and (x2, y2) -> right-bottom
@ -53,21 +51,9 @@ class Stream(BaseParser):
to generate columns. to generate columns.
""" """
def __init__(self, table_areas=None, columns=None, split_text=False,
def __init__( flag_size=False, strip_text='', edge_tol=50, row_tol=2,
self, column_tol=0, **kwargs):
table_regions=None,
table_areas=None,
columns=None,
split_text=False,
flag_size=False,
strip_text="",
edge_tol=50,
row_tol=2,
column_tol=0,
**kwargs,
):
self.table_regions = table_regions
self.table_areas = table_areas self.table_areas = table_areas
self.columns = columns self.columns = columns
self._validate_columns() self._validate_columns()
@ -121,7 +107,6 @@ class Stream(BaseParser):
row_y = 0 row_y = 0
rows = [] rows = []
temp = [] temp = []
for t in text: for t in text:
# is checking for upright necessary? # is checking for upright necessary?
# if t.get_text().strip() and all([obj.upright for obj in t._objs if # if t.get_text().strip() and all([obj.upright for obj in t._objs if
@ -132,10 +117,8 @@ class Stream(BaseParser):
temp = [] temp = []
row_y = t.y0 row_y = t.y0
temp.append(t) temp.append(t)
rows.append(sorted(temp, key=lambda t: t.x0)) rows.append(sorted(temp, key=lambda t: t.x0))
if len(rows) > 1: __ = rows.pop(0) # TODO: hacky
__ = rows.pop(0) # TODO: hacky
return rows return rows
@staticmethod @staticmethod
@ -162,9 +145,8 @@ class Stream(BaseParser):
else: else:
lower = merged[-1] lower = merged[-1]
if column_tol >= 0: if column_tol >= 0:
if higher[0] <= lower[1] or np.isclose( if (higher[0] <= lower[1] or
higher[0], lower[1], atol=column_tol np.isclose(higher[0], lower[1], atol=column_tol)):
):
upper_bound = max(lower[1], higher[1]) upper_bound = max(lower[1], higher[1])
lower_bound = min(lower[0], higher[0]) lower_bound = min(lower[0], higher[0])
merged[-1] = (lower_bound, upper_bound) merged[-1] = (lower_bound, upper_bound)
@ -199,14 +181,13 @@ class Stream(BaseParser):
List of continuous row y-coordinate tuples. List of continuous row y-coordinate tuples.
""" """
row_mids = [ row_mids = [sum([(t.y0 + t.y1) / 2 for t in r]) / len(r)
sum([(t.y0 + t.y1) / 2 for t in r]) / len(r) if len(r) > 0 else 0 if len(r) > 0 else 0 for r in rows_grouped]
for r in rows_grouped
]
rows = [(row_mids[i] + row_mids[i - 1]) / 2 for i in range(1, len(row_mids))] rows = [(row_mids[i] + row_mids[i - 1]) / 2 for i in range(1, len(row_mids))]
rows.insert(0, text_y_max) rows.insert(0, text_y_max)
rows.append(text_y_min) rows.append(text_y_min)
rows = [(rows[i], rows[i + 1]) for i in range(0, len(rows) - 1)] rows = [(rows[i], rows[i + 1])
for i in range(0, len(rows) - 1)]
return rows return rows
@staticmethod @staticmethod
@ -231,9 +212,8 @@ class Stream(BaseParser):
if text: if text:
text = Stream._group_rows(text, row_tol=row_tol) text = Stream._group_rows(text, row_tol=row_tol)
elements = [len(r) for r in text] elements = [len(r) for r in text]
new_cols = [ new_cols = [(t.x0, t.x1)
(t.x0, t.x1) for r in text if len(r) == max(elements) for t in r for r in text if len(r) == max(elements) for t in r]
]
cols.extend(Stream._merge_columns(sorted(new_cols))) cols.extend(Stream._merge_columns(sorted(new_cols)))
return cols return cols
@ -258,13 +238,15 @@ class Stream(BaseParser):
cols = [(cols[i][0] + cols[i - 1][1]) / 2 for i in range(1, len(cols))] cols = [(cols[i][0] + cols[i - 1][1]) / 2 for i in range(1, len(cols))]
cols.insert(0, text_x_min) cols.insert(0, text_x_min)
cols.append(text_x_max) cols.append(text_x_max)
cols = [(cols[i], cols[i + 1]) for i in range(0, len(cols) - 1)] cols = [(cols[i], cols[i + 1])
for i in range(0, len(cols) - 1)]
return cols return cols
def _validate_columns(self): def _validate_columns(self):
if self.table_areas is not None and self.columns is not None: if self.table_areas is not None and self.columns is not None:
if len(self.table_areas) != len(self.columns): if len(self.table_areas) != len(self.columns):
raise ValueError("Length of table_areas and columns" " should be equal") raise ValueError("Length of table_areas and columns"
" should be equal")
def _nurminen_table_detection(self, textlines): def _nurminen_table_detection(self, textlines):
"""A general implementation of the table detection algorithm """A general implementation of the table detection algorithm
@ -293,22 +275,7 @@ class Stream(BaseParser):
def _generate_table_bbox(self): def _generate_table_bbox(self):
self.textedges = [] self.textedges = []
if self.table_areas is None: if self.table_areas is not None:
hor_text = self.horizontal_text
if self.table_regions is not None:
# filter horizontal text
hor_text = []
for region in self.table_regions:
x1, y1, x2, y2 = region.split(",")
x1 = float(x1)
y1 = float(y1)
x2 = float(x2)
y2 = float(y2)
region_text = text_in_bbox((x1, y2, x2, y1), self.horizontal_text)
hor_text.extend(region_text)
# find tables based on nurminen's detection algorithm
table_bbox = self._nurminen_table_detection(hor_text)
else:
table_bbox = {} table_bbox = {}
for area in self.table_areas: for area in self.table_areas:
x1, y1, x2, y2 = area.split(",") x1, y1, x2, y2 = area.split(",")
@ -317,21 +284,24 @@ class Stream(BaseParser):
x2 = float(x2) x2 = float(x2)
y2 = float(y2) y2 = float(y2)
table_bbox[(x1, y2, x2, y1)] = None table_bbox[(x1, y2, x2, y1)] = None
else:
# find tables based on nurminen's detection algorithm
table_bbox = self._nurminen_table_detection(self.horizontal_text)
self.table_bbox = table_bbox self.table_bbox = table_bbox
def _generate_columns_and_rows(self, table_idx, tk): def _generate_columns_and_rows(self, table_idx, tk):
# select elements which lie within table_bbox # select elements which lie within table_bbox
t_bbox = {} t_bbox = {}
t_bbox["horizontal"] = text_in_bbox(tk, self.horizontal_text) t_bbox['horizontal'] = text_in_bbox(tk, self.horizontal_text)
t_bbox["vertical"] = text_in_bbox(tk, self.vertical_text) t_bbox['vertical'] = text_in_bbox(tk, self.vertical_text)
t_bbox["horizontal"].sort(key=lambda x: (-x.y0, x.x0)) t_bbox['horizontal'].sort(key=lambda x: (-x.y0, x.x0))
t_bbox["vertical"].sort(key=lambda x: (x.x0, -x.y0)) t_bbox['vertical'].sort(key=lambda x: (x.x0, -x.y0))
self.t_bbox = t_bbox self.t_bbox = t_bbox
text_x_min, text_y_min, text_x_max, text_y_max = self._text_bbox(self.t_bbox) text_x_min, text_y_min, text_x_max, text_y_max = self._text_bbox(self.t_bbox)
rows_grouped = self._group_rows(self.t_bbox["horizontal"], row_tol=self.row_tol) rows_grouped = self._group_rows(self.t_bbox['horizontal'], row_tol=self.row_tol)
rows = self._join_rows(rows_grouped, text_y_max, text_y_min) rows = self._join_rows(rows_grouped, text_y_max, text_y_min)
elements = [len(r) for r in rows_grouped] elements = [len(r) for r in rows_grouped]
@ -340,7 +310,7 @@ class Stream(BaseParser):
# take (0, pdf_width) by default # take (0, pdf_width) by default
# similar to else condition # similar to else condition
# len can't be 1 # len can't be 1
cols = self.columns[table_idx].split(",") cols = self.columns[table_idx].split(',')
cols = [float(c) for c in cols] cols = [float(c) for c in cols]
cols.insert(0, text_x_min) cols.insert(0, text_x_min)
cols.append(text_x_max) cols.append(text_x_max)
@ -348,46 +318,34 @@ class Stream(BaseParser):
else: else:
# calculate mode of the list of number of elements in # calculate mode of the list of number of elements in
# each row to guess the number of columns # each row to guess the number of columns
-            if not len(elements):
-                cols = [(text_x_min, text_x_max)]
-            else:
-                ncols = max(set(elements), key=elements.count)
-                if ncols == 1:
-                    # if mode is 1, the page usually contains not tables
-                    # but there can be cases where the list can be skewed,
-                    # try to remove all 1s from list in this case and
-                    # see if the list contains elements, if yes, then use
-                    # the mode after removing 1s
-                    elements = list(filter(lambda x: x != 1, elements))
-                    if len(elements):
-                        ncols = max(set(elements), key=elements.count)
-                    else:
-                        warnings.warn(f"No tables found in table area {table_idx + 1}")
-                cols = [
-                    (t.x0, t.x1) for r in rows_grouped if len(r) == ncols for t in r
-                ]
-                cols = self._merge_columns(sorted(cols), column_tol=self.column_tol)
-                inner_text = []
-                for i in range(1, len(cols)):
-                    left = cols[i - 1][1]
-                    right = cols[i][0]
-                    inner_text.extend(
-                        [
-                            t
-                            for direction in self.t_bbox
-                            for t in self.t_bbox[direction]
-                            if t.x0 > left and t.x1 < right
-                        ]
-                    )
-                outer_text = [
-                    t
-                    for direction in self.t_bbox
-                    for t in self.t_bbox[direction]
-                    if t.x0 > cols[-1][1] or t.x1 < cols[0][0]
-                ]
-                inner_text.extend(outer_text)
-                cols = self._add_columns(cols, inner_text, self.row_tol)
-                cols = self._join_columns(cols, text_x_min, text_x_max)
+            ncols = max(set(elements), key=elements.count)
+            if ncols == 1:
+                # if mode is 1, the page usually contains not tables
+                # but there can be cases where the list can be skewed,
+                # try to remove all 1s from list in this case and
+                # see if the list contains elements, if yes, then use
+                # the mode after removing 1s
+                elements = list(filter(lambda x: x != 1, elements))
+                if len(elements):
+                    ncols = max(set(elements), key=elements.count)
+                else:
+                    warnings.warn("No tables found in table area {}".format(
+                        table_idx + 1))
+            cols = [(t.x0, t.x1) for r in rows_grouped if len(r) == ncols for t in r]
+            cols = self._merge_columns(sorted(cols), column_tol=self.column_tol)
+            inner_text = []
+            for i in range(1, len(cols)):
+                left = cols[i - 1][1]
+                right = cols[i][0]
+                inner_text.extend([t for direction in self.t_bbox
+                                   for t in self.t_bbox[direction]
+                                   if t.x0 > left and t.x1 < right])
+            outer_text = [t for direction in self.t_bbox
+                          for t in self.t_bbox[direction]
+                          if t.x0 > cols[-1][1] or t.x1 < cols[0][0]]
+            inner_text.extend(outer_text)
+            cols = self._add_columns(cols, inner_text, self.row_tol)
+            cols = self._join_columns(cols, text_x_min, text_x_max)
return cols, rows return cols, rows
@ -398,16 +356,11 @@ class Stream(BaseParser):
pos_errors = [] pos_errors = []
# TODO: have a single list in place of two directional ones? # TODO: have a single list in place of two directional ones?
# sorted on x-coordinate based on reading order i.e. LTR or RTL # sorted on x-coordinate based on reading order i.e. LTR or RTL
for direction in ["vertical", "horizontal"]: for direction in ['vertical', 'horizontal']:
for t in self.t_bbox[direction]: for t in self.t_bbox[direction]:
indices, error = get_table_index( indices, error = get_table_index(
table, table, t, direction, split_text=self.split_text,
t, flag_size=self.flag_size, strip_text=self.strip_text)
direction,
split_text=self.split_text,
flag_size=self.flag_size,
strip_text=self.strip_text,
)
if indices[:2] != (-1, -1): if indices[:2] != (-1, -1):
pos_errors.append(error) pos_errors.append(error)
for r_idx, c_idx, text in indices: for r_idx, c_idx, text in indices:
@ -419,11 +372,11 @@ class Stream(BaseParser):
table.shape = table.df.shape table.shape = table.df.shape
whitespace = compute_whitespace(data) whitespace = compute_whitespace(data)
table.flavor = "stream" table.flavor = 'stream'
table.accuracy = accuracy table.accuracy = accuracy
table.whitespace = whitespace table.whitespace = whitespace
table.order = table_idx + 1 table.order = table_idx + 1
table.page = int(os.path.basename(self.rootname).replace("page-", "")) table.page = int(os.path.basename(self.rootname).replace('page-', ''))
# for plotting # for plotting
_text = [] _text = []
@ -438,28 +391,20 @@ class Stream(BaseParser):
def extract_tables(self, filename, suppress_stdout=False, layout_kwargs={}): def extract_tables(self, filename, suppress_stdout=False, layout_kwargs={}):
self._generate_layout(filename, layout_kwargs) self._generate_layout(filename, layout_kwargs)
base_filename = os.path.basename(self.rootname)
if not suppress_stdout: if not suppress_stdout:
logger.info(f"Processing {base_filename}") logger.info('Processing {}'.format(os.path.basename(self.rootname)))
if not self.horizontal_text: if not self.horizontal_text:
if self.images: warnings.warn("No tables found on {}".format(
warnings.warn( os.path.basename(self.rootname)))
f"{base_filename} is image-based, camelot only works on"
" text-based pages."
)
else:
warnings.warn(f"No tables found on {base_filename}")
return [] return []
self._generate_table_bbox() self._generate_table_bbox()
_tables = [] _tables = []
# sort tables based on y-coord # sort tables based on y-coord
for table_idx, tk in enumerate( for table_idx, tk in enumerate(sorted(
sorted(self.table_bbox.keys(), key=lambda x: x[1], reverse=True) self.table_bbox.keys(), key=lambda x: x[1], reverse=True)):
):
cols, rows = self._generate_columns_and_rows(table_idx, tk) cols, rows = self._generate_columns_and_rows(table_idx, tk)
table = self._generate_table(table_idx, cols, rows) table = self._generate_table(table_idx, cols, rows)
table._bbox = tk table._bbox = tk
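The column-count guess in _generate_columns_and_rows above boils down to taking the mode of the per-row text counts and discarding a mode of 1. A tiny sketch of that idea with invented numbers:

```
from collections import Counter

# per-row text counts for a page (invented numbers)
elements = [1, 3, 3, 3, 1, 3, 2]

# the most common row length is the column-count guess
ncols = max(set(elements), key=elements.count)
if ncols == 1:
    # a mode of 1 usually means no table; drop the 1s and retry
    elements = [e for e in elements if e != 1]
    if elements:
        ncols = max(set(elements), key=elements.count)

print(ncols)              # -> 3
print(Counter(elements))  # row-length histogram, for inspection
```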

@ -10,7 +10,7 @@ else:
class PlotMethods(object): class PlotMethods(object):
def __call__(self, table, kind="text", filename=None): def __call__(self, table, kind='text', filename=None):
"""Plot elements found on PDF page based on kind """Plot elements found on PDF page based on kind
specified, useful for debugging and playing with different specified, useful for debugging and playing with different
parameters to get the best output. parameters to get the best output.
@ -31,21 +31,17 @@ class PlotMethods(object):
""" """
if not _HAS_MPL: if not _HAS_MPL:
raise ImportError("matplotlib is required for plotting.") raise ImportError('matplotlib is required for plotting.')
if table.flavor == "lattice" and kind in ["textedge"]: if table.flavor == 'lattice' and kind in ['textedge']:
raise NotImplementedError(f"Lattice flavor does not support kind='{kind}'") raise NotImplementedError("Lattice flavor does not support kind='{}'".format(
elif table.flavor == "stream" and kind in ["joint", "line"]: kind))
raise NotImplementedError(f"Stream flavor does not support kind='{kind}'") elif table.flavor == 'stream' and kind in ['joint', 'line']:
raise NotImplementedError("Stream flavor does not support kind='{}'".format(
kind))
plot_method = getattr(self, kind) plot_method = getattr(self, kind)
fig = plot_method(table) return plot_method(table)
if filename is not None:
fig.savefig(filename)
return None
return fig
def text(self, table): def text(self, table):
"""Generates a plot for all text elements present """Generates a plot for all text elements present
@ -61,12 +57,18 @@ class PlotMethods(object):
""" """
fig = plt.figure() fig = plt.figure()
ax = fig.add_subplot(111, aspect="equal") ax = fig.add_subplot(111, aspect='equal')
xs, ys = [], [] xs, ys = [], []
for t in table._text: for t in table._text:
xs.extend([t[0], t[2]]) xs.extend([t[0], t[2]])
ys.extend([t[1], t[3]]) ys.extend([t[1], t[3]])
ax.add_patch(patches.Rectangle((t[0], t[1]), t[2] - t[0], t[3] - t[1])) ax.add_patch(
patches.Rectangle(
(t[0], t[1]),
t[2] - t[0],
t[3] - t[1]
)
)
ax.set_xlim(min(xs) - 10, max(xs) + 10) ax.set_xlim(min(xs) - 10, max(xs) + 10)
ax.set_ylim(min(ys) - 10, max(ys) + 10) ax.set_ylim(min(ys) - 10, max(ys) + 10)
return fig return fig
@ -85,17 +87,21 @@ class PlotMethods(object):
""" """
fig = plt.figure() fig = plt.figure()
ax = fig.add_subplot(111, aspect="equal") ax = fig.add_subplot(111, aspect='equal')
for row in table.cells: for row in table.cells:
for cell in row: for cell in row:
if cell.left: if cell.left:
ax.plot([cell.lb[0], cell.lt[0]], [cell.lb[1], cell.lt[1]]) ax.plot([cell.lb[0], cell.lt[0]],
[cell.lb[1], cell.lt[1]])
if cell.right: if cell.right:
ax.plot([cell.rb[0], cell.rt[0]], [cell.rb[1], cell.rt[1]]) ax.plot([cell.rb[0], cell.rt[0]],
[cell.rb[1], cell.rt[1]])
if cell.top: if cell.top:
ax.plot([cell.lt[0], cell.rt[0]], [cell.lt[1], cell.rt[1]]) ax.plot([cell.lt[0], cell.rt[0]],
[cell.lt[1], cell.rt[1]])
if cell.bottom: if cell.bottom:
ax.plot([cell.lb[0], cell.rb[0]], [cell.lb[1], cell.rb[1]]) ax.plot([cell.lb[0], cell.rb[0]],
[cell.lb[1], cell.rb[1]])
return fig return fig
def contour(self, table): def contour(self, table):
@ -118,7 +124,7 @@ class PlotMethods(object):
img, table_bbox = (None, {table._bbox: None}) img, table_bbox = (None, {table._bbox: None})
_FOR_LATTICE = False _FOR_LATTICE = False
fig = plt.figure() fig = plt.figure()
ax = fig.add_subplot(111, aspect="equal") ax = fig.add_subplot(111, aspect='equal')
xs, ys = [], [] xs, ys = [], []
if not _FOR_LATTICE: if not _FOR_LATTICE:
@ -127,14 +133,21 @@ class PlotMethods(object):
ys.extend([t[1], t[3]]) ys.extend([t[1], t[3]])
ax.add_patch( ax.add_patch(
patches.Rectangle( patches.Rectangle(
(t[0], t[1]), t[2] - t[0], t[3] - t[1], color="blue" (t[0], t[1]),
t[2] - t[0],
t[3] - t[1],
color='blue'
) )
) )
for t in table_bbox.keys(): for t in table_bbox.keys():
ax.add_patch( ax.add_patch(
patches.Rectangle( patches.Rectangle(
(t[0], t[1]), t[2] - t[0], t[3] - t[1], fill=False, color="red" (t[0], t[1]),
t[2] - t[0],
t[3] - t[1],
fill=False,
color='red'
) )
) )
if not _FOR_LATTICE: if not _FOR_LATTICE:
@ -160,19 +173,25 @@ class PlotMethods(object):
""" """
fig = plt.figure() fig = plt.figure()
ax = fig.add_subplot(111, aspect="equal") ax = fig.add_subplot(111, aspect='equal')
xs, ys = [], [] xs, ys = [], []
for t in table._text: for t in table._text:
xs.extend([t[0], t[2]]) xs.extend([t[0], t[2]])
ys.extend([t[1], t[3]]) ys.extend([t[1], t[3]])
ax.add_patch( ax.add_patch(
patches.Rectangle((t[0], t[1]), t[2] - t[0], t[3] - t[1], color="blue") patches.Rectangle(
(t[0], t[1]),
t[2] - t[0],
t[3] - t[1],
color='blue'
)
) )
ax.set_xlim(min(xs) - 10, max(xs) + 10) ax.set_xlim(min(xs) - 10, max(xs) + 10)
ax.set_ylim(min(ys) - 10, max(ys) + 10) ax.set_ylim(min(ys) - 10, max(ys) + 10)
for te in table._textedges: for te in table._textedges:
ax.plot([te.x, te.x], [te.y0, te.y1]) ax.plot([te.x, te.x],
[te.y0, te.y1])
return fig return fig
@ -191,14 +210,14 @@ class PlotMethods(object):
""" """
img, table_bbox = table._image img, table_bbox = table._image
fig = plt.figure() fig = plt.figure()
ax = fig.add_subplot(111, aspect="equal") ax = fig.add_subplot(111, aspect='equal')
x_coord = [] x_coord = []
y_coord = [] y_coord = []
for k in table_bbox.keys(): for k in table_bbox.keys():
for coord in table_bbox[k]: for coord in table_bbox[k]:
x_coord.append(coord[0]) x_coord.append(coord[0])
y_coord.append(coord[1]) y_coord.append(coord[1])
ax.plot(x_coord, y_coord, "ro") ax.plot(x_coord, y_coord, 'ro')
ax.imshow(img) ax.imshow(img)
return fig return fig
@ -216,7 +235,7 @@ class PlotMethods(object):
""" """
fig = plt.figure() fig = plt.figure()
ax = fig.add_subplot(111, aspect="equal") ax = fig.add_subplot(111, aspect='equal')
vertical, horizontal = table._segments vertical, horizontal = table._segments
for v in vertical: for v in vertical:
ax.plot([v[0], v[2]], [v[1], v[3]]) ax.plot([v[0], v[2]], [v[1], v[3]])
@ -1,7 +1,8 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
from __future__ import division
import os import os
import re import sys
import random import random
import shutil import shutil
import string import string
@ -18,22 +19,23 @@ from pdfminer.pdfpage import PDFTextExtractionNotAllowed
from pdfminer.pdfinterp import PDFResourceManager from pdfminer.pdfinterp import PDFResourceManager
from pdfminer.pdfinterp import PDFPageInterpreter from pdfminer.pdfinterp import PDFPageInterpreter
from pdfminer.converter import PDFPageAggregator from pdfminer.converter import PDFPageAggregator
from pdfminer.layout import ( from pdfminer.layout import (LAParams, LTAnno, LTChar, LTTextLineHorizontal,
LAParams, LTTextLineVertical)
LTAnno,
LTChar,
LTTextLineHorizontal,
LTTextLineVertical,
LTImage,
)
from urllib.request import Request, urlopen
from urllib.parse import urlparse as parse_url PY3 = sys.version_info[0] >= 3
from urllib.parse import uses_relative, uses_netloc, uses_params if PY3:
from urllib.request import urlopen
from urllib.parse import urlparse as parse_url
from urllib.parse import uses_relative, uses_netloc, uses_params
else:
from urllib2 import urlopen
from urlparse import urlparse as parse_url
from urlparse import uses_relative, uses_netloc, uses_params
_VALID_URLS = set(uses_relative + uses_netloc + uses_params) _VALID_URLS = set(uses_relative + uses_netloc + uses_params)
_VALID_URLS.discard("") _VALID_URLS.discard('')
# https://github.com/pandas-dev/pandas/blob/master/pandas/io/common.py # https://github.com/pandas-dev/pandas/blob/master/pandas/io/common.py
@ -57,11 +59,9 @@ def is_url(url):
def random_string(length): def random_string(length):
ret = "" ret = ''
while length: while length:
ret += random.choice( ret += random.choice(string.digits + string.ascii_lowercase + string.ascii_uppercase)
string.digits + string.ascii_lowercase + string.ascii_uppercase
)
length -= 1 length -= 1
return ret return ret
@ -79,13 +79,14 @@ def download_url(url):
Temporary filepath. Temporary filepath.
""" """
filename = f"{random_string(6)}.pdf" filename = '{}.pdf'.format(random_string(6))
with tempfile.NamedTemporaryFile("wb", delete=False) as f: with tempfile.NamedTemporaryFile('wb', delete=False) as f:
headers = {"User-Agent": "Mozilla/5.0"} obj = urlopen(url)
request = Request(url, None, headers) if PY3:
obj = urlopen(request) content_type = obj.info().get_content_type()
content_type = obj.info().get_content_type() else:
if content_type != "application/pdf": content_type = obj.info().getheader('Content-Type')
if content_type != 'application/pdf':
raise NotImplementedError("File format not supported") raise NotImplementedError("File format not supported")
f.write(obj.read()) f.write(obj.read())
filepath = os.path.join(os.path.dirname(f.name), filename) filepath = os.path.join(os.path.dirname(f.name), filename)
@ -93,37 +94,39 @@ def download_url(url):
return filepath return filepath
stream_kwargs = ["columns", "edge_tol", "row_tol", "column_tol"] stream_kwargs = [
'columns',
'row_tol',
'column_tol'
]
lattice_kwargs = [ lattice_kwargs = [
"process_background", 'process_background',
"line_scale", 'line_size_scaling',
"copy_text", 'copy_text',
"shift_text", 'shift_text',
"line_tol", 'line_tol',
"joint_tol", 'joint_tol',
"threshold_blocksize", 'threshold_blocksize',
"threshold_constant", 'threshold_constant',
"iterations", 'iterations'
"resolution",
] ]
def validate_input(kwargs, flavor="lattice"): def validate_input(kwargs, flavor='lattice'):
def check_intersection(parser_kwargs, input_kwargs): def check_intersection(parser_kwargs, input_kwargs):
isec = set(parser_kwargs).intersection(set(input_kwargs.keys())) isec = set(parser_kwargs).intersection(set(input_kwargs.keys()))
if isec: if isec:
raise ValueError( raise ValueError("{} cannot be used with flavor='{}'".format(
f"{','.join(sorted(isec))} cannot be used with flavor='{flavor}'" ",".join(sorted(isec)), flavor))
)
if flavor == "lattice": if flavor == 'lattice':
check_intersection(stream_kwargs, kwargs) check_intersection(stream_kwargs, kwargs)
else: else:
check_intersection(lattice_kwargs, kwargs) check_intersection(lattice_kwargs, kwargs)
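As an illustration, a sketch of what this validation rejects, assuming the helper is imported from ``camelot.utils`` (the keyword argument and its value are arbitrary)::

    >>> from camelot.utils import validate_input
    >>> validate_input({'line_scale': 40}, flavor='stream')
    Traceback (most recent call last):
        ...
    ValueError: line_scale cannot be used with flavor='stream'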
def remove_extra(kwargs, flavor="lattice"): def remove_extra(kwargs, flavor='lattice'):
if flavor == "lattice": if flavor == 'lattice':
for key in kwargs.keys(): for key in kwargs.keys():
if key in stream_kwargs: if key in stream_kwargs:
kwargs.pop(key) kwargs.pop(key)
@ -253,33 +256,29 @@ def scale_image(tables, v_segments, h_segments, factors):
v_segments_new = [] v_segments_new = []
for v in v_segments: for v in v_segments:
x1, x2 = scale(v[0], scaling_factor_x), scale(v[2], scaling_factor_x) x1, x2 = scale(v[0], scaling_factor_x), scale(v[2], scaling_factor_x)
y1, y2 = ( y1, y2 = scale(abs(translate(-img_y, v[1])), scaling_factor_y), scale(
scale(abs(translate(-img_y, v[1])), scaling_factor_y), abs(translate(-img_y, v[3])), scaling_factor_y)
scale(abs(translate(-img_y, v[3])), scaling_factor_y),
)
v_segments_new.append((x1, y1, x2, y2)) v_segments_new.append((x1, y1, x2, y2))
h_segments_new = [] h_segments_new = []
for h in h_segments: for h in h_segments:
x1, x2 = scale(h[0], scaling_factor_x), scale(h[2], scaling_factor_x) x1, x2 = scale(h[0], scaling_factor_x), scale(h[2], scaling_factor_x)
y1, y2 = ( y1, y2 = scale(abs(translate(-img_y, h[1])), scaling_factor_y), scale(
scale(abs(translate(-img_y, h[1])), scaling_factor_y), abs(translate(-img_y, h[3])), scaling_factor_y)
scale(abs(translate(-img_y, h[3])), scaling_factor_y),
)
h_segments_new.append((x1, y1, x2, y2)) h_segments_new.append((x1, y1, x2, y2))
return tables_new, v_segments_new, h_segments_new return tables_new, v_segments_new, h_segments_new
def get_rotation(chars, horizontal_text, vertical_text): def get_rotation(lttextlh, lttextlv, ltchar):
"""Detects if text in table is rotated or not using the current """Detects if text in table is rotated or not using the current
transformation matrix (CTM) and returns its orientation. transformation matrix (CTM) and returns its orientation.
Parameters Parameters
---------- ----------
horizontal_text : list lttextlh : list
List of PDFMiner LTTextLineHorizontal objects. List of PDFMiner LTTextLineHorizontal objects.
vertical_text : list lttextlv : list
List of PDFMiner LTTextLineVertical objects. List of PDFMiner LTTextLineVertical objects.
ltchar : list ltchar : list
List of PDFMiner LTChar objects. List of PDFMiner LTChar objects.
@ -292,13 +291,13 @@ def get_rotation(chars, horizontal_text, vertical_text):
rotated 90 degree clockwise. rotated 90 degree clockwise.
""" """
rotation = "" rotation = ''
hlen = len([t for t in horizontal_text if t.get_text().strip()]) hlen = len([t for t in lttextlh if t.get_text().strip()])
vlen = len([t for t in vertical_text if t.get_text().strip()]) vlen = len([t for t in lttextlv if t.get_text().strip()])
if hlen < vlen: if hlen < vlen:
clockwise = sum(t.matrix[1] < 0 and t.matrix[2] > 0 for t in chars) clockwise = sum(t.matrix[1] < 0 and t.matrix[2] > 0 for t in ltchar)
anticlockwise = sum(t.matrix[1] > 0 and t.matrix[2] < 0 for t in chars) anticlockwise = sum(t.matrix[1] > 0 and t.matrix[2] < 0 for t in ltchar)
rotation = "anticlockwise" if clockwise < anticlockwise else "clockwise" rotation = 'anticlockwise' if clockwise < anticlockwise else 'clockwise'
return rotation return rotation
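A small, self-contained sketch of this detection logic, using hypothetical stand-ins for the PDFMiner objects (a 90-degree clockwise rotation has a CTM of the form ``(0, -1, 1, 0, tx, ty)``) and assuming the function is importable from ``camelot.utils``::

    >>> from types import SimpleNamespace
    >>> from camelot.utils import get_rotation
    >>> chars = [SimpleNamespace(matrix=(0, -1, 1, 0, 0, 0)) for _ in range(5)]
    >>> horizontal = []                                           # no horizontal text lines
    >>> vertical = [SimpleNamespace(get_text=lambda: 'rotated')]  # one vertical text line
    >>> get_rotation(chars, horizontal, vertical)
    'clockwise'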
@ -326,16 +325,10 @@ def segments_in_bbox(bbox, v_segments, h_segments):
""" """
lb = (bbox[0], bbox[1]) lb = (bbox[0], bbox[1])
rt = (bbox[2], bbox[3]) rt = (bbox[2], bbox[3])
v_s = [ v_s = [v for v in v_segments if v[1] > lb[1] - 2 and
v v[3] < rt[1] + 2 and lb[0] - 2 <= v[0] <= rt[0] + 2]
for v in v_segments h_s = [h for h in h_segments if h[0] > lb[0] - 2 and
if v[1] > lb[1] - 2 and v[3] < rt[1] + 2 and lb[0] - 2 <= v[0] <= rt[0] + 2 h[2] < rt[0] + 2 and lb[1] - 2 <= h[1] <= rt[1] + 2]
]
h_s = [
h
for h in h_segments
if h[0] > lb[0] - 2 and h[2] < rt[0] + 2 and lb[1] - 2 <= h[1] <= rt[1] + 2
]
return v_s, h_s return v_s, h_s
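For example, with plain coordinate tuples (values chosen only for illustration), the 2-point tolerance keeps segments inside or touching the box and drops the rest; this assumes the helper is importable from ``camelot.utils``::

    >>> from camelot.utils import segments_in_bbox
    >>> bbox = (0, 0, 100, 100)
    >>> v_segments = [(10, 5, 10, 95), (150, 5, 150, 95)]   # the second segment lies outside bbox
    >>> h_segments = [(5, 20, 95, 20)]
    >>> segments_in_bbox(bbox, v_segments, h_segments)
    ([(10, 5, 10, 95)], [(5, 20, 95, 20)])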
@ -346,115 +339,22 @@ def text_in_bbox(bbox, text):
---------- ----------
bbox : tuple bbox : tuple
Tuple (x1, y1, x2, y2) representing a bounding box where Tuple (x1, y1, x2, y2) representing a bounding box where
(x1, y1) -> lb and (x2, y2) -> rt in the PDF coordinate (x1, y1) -> lb and (x2, y2) -> rt in PDFMiner coordinate
space. space.
text : List of PDFMiner text objects. text : List of PDFMiner text objects.
Returns Returns
------- -------
t_bbox : list t_bbox : list
List of PDFMiner text objects that lie inside table, discarding the overlapping ones List of PDFMiner text objects that lie inside table.
""" """
lb = (bbox[0], bbox[1]) lb = (bbox[0], bbox[1])
rt = (bbox[2], bbox[3]) rt = (bbox[2], bbox[3])
t_bbox = [ t_bbox = [t for t in text if lb[0] - 2 <= (t.x0 + t.x1) / 2.0
t <= rt[0] + 2 and lb[1] - 2 <= (t.y0 + t.y1) / 2.0
for t in text <= rt[1] + 2]
if lb[0] - 2 <= (t.x0 + t.x1) / 2.0 <= rt[0] + 2 return t_bbox
and lb[1] - 2 <= (t.y0 + t.y1) / 2.0 <= rt[1] + 2
]
# Avoid duplicate text by discarding overlapping boxes
rest = {t for t in t_bbox}
for ba in t_bbox:
for bb in rest.copy():
if ba == bb:
continue
if bbox_intersect(ba, bb):
# if the intersection is larger than 80% of ba's size, we keep the longest
if (bbox_intersection_area(ba, bb) / bbox_area(ba)) > 0.8:
if bbox_longer(bb, ba):
rest.discard(ba)
unique_boxes = list(rest)
return unique_boxes
def bbox_intersection_area(ba, bb) -> float:
"""Returns area of the intersection of the bounding boxes of two PDFMiner objects.
Parameters
----------
ba : PDFMiner text object
bb : PDFMiner text object
Returns
-------
intersection_area : float
Area of the intersection of the bounding boxes of both objects
"""
x_left = max(ba.x0, bb.x0)
y_top = min(ba.y1, bb.y1)
x_right = min(ba.x1, bb.x1)
y_bottom = max(ba.y0, bb.y0)
if x_right < x_left or y_bottom > y_top:
return 0.0
intersection_area = (x_right - x_left) * (y_top - y_bottom)
return intersection_area
def bbox_area(bb) -> float:
"""Returns area of the bounding box of a PDFMiner object.
Parameters
----------
bb : PDFMiner text object
Returns
-------
area : float
Area of the bounding box of the object
"""
return (bb.x1 - bb.x0) * (bb.y1 - bb.y0)
def bbox_intersect(ba, bb) -> bool:
"""Returns True if the bounding boxes of two PDFMiner objects intersect.
Parameters
----------
ba : PDFMiner text object
bb : PDFMiner text object
Returns
-------
overlaps : bool
True if the bounding boxes intersect
"""
return ba.x1 >= bb.x0 and bb.x1 >= ba.x0 and ba.y1 >= bb.y0 and bb.y1 >= ba.y0
def bbox_longer(ba, bb) -> bool:
"""Returns True if the bounding box of the first PDFMiner object is longer or equal to the second.
Parameters
----------
ba : PDFMiner text object
bb : PDFMiner text object
Returns
-------
longer : bool
        True if the bounding box of the first object is longer than or equal
"""
return (ba.x1 - ba.x0) >= (bb.x1 - bb.x0)
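A quick sketch of how these helpers behave, using hypothetical objects that only carry the ``x0``/``y0``/``x1``/``y1`` attributes the functions read (and assuming they are importable from ``camelot.utils``)::

    >>> from types import SimpleNamespace
    >>> from camelot.utils import bbox_intersect, bbox_area, bbox_intersection_area, bbox_longer
    >>> a = SimpleNamespace(x0=0, y0=0, x1=10, y1=10)
    >>> b = SimpleNamespace(x0=9, y0=9, x1=20, y1=20)
    >>> bbox_intersect(a, b)   # the boxes share a 1x1 corner
    True
    >>> bbox_area(a)
    100
    >>> bbox_intersection_area(a, b)
    1
    >>> bbox_longer(b, a)      # b spans 11 units horizontally, a spans 10
    True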
def merge_close_lines(ar, line_tol=2): def merge_close_lines(ar, line_tol=2):
@ -485,33 +385,12 @@ def merge_close_lines(ar, line_tol=2):
return ret return ret
def text_strip(text, strip=""):
"""Strips any characters in `strip` that are present in `text`.
Parameters
----------
text : str
Text to process and strip.
strip : str, optional (default: '')
Characters that should be stripped from `text`.
Returns
-------
stripped : str
"""
if not strip:
return text
stripped = re.sub(
fr"[{''.join(map(re.escape, strip))}]", "", text, flags=re.UNICODE
)
return stripped
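For instance, with the space, dot and newline characters that the ``strip_text`` option typically targets (assuming the helper is importable from ``camelot.utils``)::

    >>> from camelot.utils import text_strip
    >>> text_strip('24.912  \n', strip=' .\n')
    '24912'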
# TODO: combine the following functions into a TextProcessor class which # TODO: combine the following functions into a TextProcessor class which
# applies corresponding transformations sequentially # applies corresponding transformations sequentially
# (inspired from sklearn.pipeline.Pipeline) # (inspired from sklearn.pipeline.Pipeline)
def flag_font_size(textline, direction, strip_text=""): def flag_font_size(textline, direction, strip_text=''):
"""Flags super/subscripts in text by enclosing them with <s></s>. """Flags super/subscripts in text by enclosing them with <s></s>.
May give false positives. May give false positives.
@ -530,18 +409,10 @@ def flag_font_size(textline, direction, strip_text=""):
fstring : string fstring : string
""" """
if direction == "horizontal": if direction == 'horizontal':
d = [ d = [(t.get_text(), np.round(t.height, decimals=6)) for t in textline if not isinstance(t, LTAnno)]
(t.get_text(), np.round(t.height, decimals=6)) elif direction == 'vertical':
for t in textline d = [(t.get_text(), np.round(t.width, decimals=6)) for t in textline if not isinstance(t, LTAnno)]
if not isinstance(t, LTAnno)
]
elif direction == "vertical":
d = [
(t.get_text(), np.round(t.width, decimals=6))
for t in textline
if not isinstance(t, LTAnno)
]
l = [np.round(size, decimals=6) for text, size in d] l = [np.round(size, decimals=6) for text, size in d]
if len(set(l)) > 1: if len(set(l)) > 1:
flist = [] flist = []
@ -549,21 +420,21 @@ def flag_font_size(textline, direction, strip_text=""):
for key, chars in groupby(d, itemgetter(1)): for key, chars in groupby(d, itemgetter(1)):
if key == min_size: if key == min_size:
fchars = [t[0] for t in chars] fchars = [t[0] for t in chars]
if "".join(fchars).strip(): if ''.join(fchars).strip():
fchars.insert(0, "<s>") fchars.insert(0, '<s>')
fchars.append("</s>") fchars.append('</s>')
flist.append("".join(fchars)) flist.append(''.join(fchars))
else: else:
fchars = [t[0] for t in chars] fchars = [t[0] for t in chars]
if "".join(fchars).strip(): if ''.join(fchars).strip():
flist.append("".join(fchars)) flist.append(''.join(fchars))
fstring = "".join(flist) fstring = ''.join(flist).strip(strip_text)
else: else:
fstring = "".join([t.get_text() for t in textline]) fstring = ''.join([t.get_text() for t in textline]).strip(strip_text)
return text_strip(fstring, strip_text) return fstring
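A minimal sketch with hypothetical character objects (only ``get_text()`` and ``height`` matter for the horizontal direction); the group of smaller trailing characters gets wrapped in ``<s></s>``. This assumes ``flag_font_size`` is importable from ``camelot.utils``::

    >>> from types import SimpleNamespace
    >>> from camelot.utils import flag_font_size
    >>> def char(text, height):
    ...     return SimpleNamespace(get_text=lambda: text, height=height)
    >>> textline = [char(c, 10.0) for c in '24.9'] + [char(c, 5.0) for c in '12']
    >>> flag_font_size(textline, 'horizontal')
    '24.9<s>12</s>'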
def split_textline(table, textline, direction, flag_size=False, strip_text=""): def split_textline(table, textline, direction, flag_size=False, strip_text=''):
"""Splits PDFMiner LTTextLine into substrings if it spans across """Splits PDFMiner LTTextLine into substrings if it spans across
multiple rows/columns. multiple rows/columns.
@ -593,70 +464,38 @@ def split_textline(table, textline, direction, flag_size=False, strip_text=""):
cut_text = [] cut_text = []
bbox = textline.bbox bbox = textline.bbox
try: try:
if direction == "horizontal" and not textline.is_empty(): if direction == 'horizontal' and not textline.is_empty():
x_overlap = [ x_overlap = [i for i, x in enumerate(table.cols) if x[0] <= bbox[2] and bbox[0] <= x[1]]
i r_idx = [j for j, r in enumerate(table.rows) if r[1] <= (bbox[1] + bbox[3]) / 2 <= r[0]]
for i, x in enumerate(table.cols)
if x[0] <= bbox[2] and bbox[0] <= x[1]
]
r_idx = [
j
for j, r in enumerate(table.rows)
if r[1] <= (bbox[1] + bbox[3]) / 2 <= r[0]
]
r = r_idx[0] r = r_idx[0]
x_cuts = [ x_cuts = [(c, table.cells[r][c].x2) for c in x_overlap if table.cells[r][c].right]
(c, table.cells[r][c].x2) for c in x_overlap if table.cells[r][c].right
]
if not x_cuts: if not x_cuts:
x_cuts = [(x_overlap[0], table.cells[r][-1].x2)] x_cuts = [(x_overlap[0], table.cells[r][-1].x2)]
for obj in textline._objs: for obj in textline._objs:
row = table.rows[r] row = table.rows[r]
for cut in x_cuts: for cut in x_cuts:
if isinstance(obj, LTChar): if isinstance(obj, LTChar):
if ( if (row[1] <= (obj.y0 + obj.y1) / 2 <= row[0] and
row[1] <= (obj.y0 + obj.y1) / 2 <= row[0] (obj.x0 + obj.x1) / 2 <= cut[1]):
and (obj.x0 + obj.x1) / 2 <= cut[1]
):
cut_text.append((r, cut[0], obj)) cut_text.append((r, cut[0], obj))
break break
else:
# TODO: add test
if cut == x_cuts[-1]:
cut_text.append((r, cut[0] + 1, obj))
elif isinstance(obj, LTAnno): elif isinstance(obj, LTAnno):
cut_text.append((r, cut[0], obj)) cut_text.append((r, cut[0], obj))
elif direction == "vertical" and not textline.is_empty(): elif direction == 'vertical' and not textline.is_empty():
y_overlap = [ y_overlap = [j for j, y in enumerate(table.rows) if y[1] <= bbox[3] and bbox[1] <= y[0]]
j c_idx = [i for i, c in enumerate(table.cols) if c[0] <= (bbox[0] + bbox[2]) / 2 <= c[1]]
for j, y in enumerate(table.rows)
if y[1] <= bbox[3] and bbox[1] <= y[0]
]
c_idx = [
i
for i, c in enumerate(table.cols)
if c[0] <= (bbox[0] + bbox[2]) / 2 <= c[1]
]
c = c_idx[0] c = c_idx[0]
y_cuts = [ y_cuts = [(r, table.cells[r][c].y1) for r in y_overlap if table.cells[r][c].bottom]
(r, table.cells[r][c].y1) for r in y_overlap if table.cells[r][c].bottom
]
if not y_cuts: if not y_cuts:
y_cuts = [(y_overlap[0], table.cells[-1][c].y1)] y_cuts = [(y_overlap[0], table.cells[-1][c].y1)]
for obj in textline._objs: for obj in textline._objs:
col = table.cols[c] col = table.cols[c]
for cut in y_cuts: for cut in y_cuts:
if isinstance(obj, LTChar): if isinstance(obj, LTChar):
if ( if (col[0] <= (obj.x0 + obj.x1) / 2 <= col[1] and
col[0] <= (obj.x0 + obj.x1) / 2 <= col[1] (obj.y0 + obj.y1) / 2 >= cut[1]):
and (obj.y0 + obj.y1) / 2 >= cut[1]
):
cut_text.append((cut[0], c, obj)) cut_text.append((cut[0], c, obj))
break break
else:
# TODO: add test
if cut == y_cuts[-1]:
cut_text.append((cut[0] - 1, c, obj))
elif isinstance(obj, LTAnno): elif isinstance(obj, LTAnno):
cut_text.append((cut[0], c, obj)) cut_text.append((cut[0], c, obj))
except IndexError: except IndexError:
@ -664,26 +503,15 @@ def split_textline(table, textline, direction, flag_size=False, strip_text=""):
grouped_chars = [] grouped_chars = []
for key, chars in groupby(cut_text, itemgetter(0, 1)): for key, chars in groupby(cut_text, itemgetter(0, 1)):
if flag_size: if flag_size:
grouped_chars.append( grouped_chars.append((key[0], key[1],
( flag_font_size([t[2] for t in chars], direction, strip_text=strip_text)))
key[0],
key[1],
flag_font_size(
[t[2] for t in chars], direction, strip_text=strip_text
),
)
)
else: else:
gchars = [t[2].get_text() for t in chars] gchars = [t[2].get_text() for t in chars]
grouped_chars.append( grouped_chars.append((key[0], key[1], ''.join(gchars).strip(strip_text)))
(key[0], key[1], text_strip("".join(gchars), strip_text))
)
return grouped_chars return grouped_chars
def get_table_index( def get_table_index(table, t, direction, split_text=False, flag_size=False, strip_text='',):
table, t, direction, split_text=False, flag_size=False, strip_text=""
):
"""Gets indices of the table cell where given text object lies by """Gets indices of the table cell where given text object lies by
comparing their y and x-coordinates. comparing their y and x-coordinates.
@ -722,9 +550,8 @@ def get_table_index(
""" """
r_idx, c_idx = [-1] * 2 r_idx, c_idx = [-1] * 2
for r in range(len(table.rows)): for r in range(len(table.rows)):
if (t.y0 + t.y1) / 2.0 < table.rows[r][0] and (t.y0 + t.y1) / 2.0 > table.rows[ if ((t.y0 + t.y1) / 2.0 < table.rows[r][0] and
r (t.y0 + t.y1) / 2.0 > table.rows[r][1]):
][1]:
lt_col_overlap = [] lt_col_overlap = []
for c in table.cols: for c in table.cols:
if c[0] <= t.x1 and c[1] >= t.x0: if c[0] <= t.x1 and c[1] >= t.x0:
@ -734,12 +561,11 @@ def get_table_index(
else: else:
lt_col_overlap.append(-1) lt_col_overlap.append(-1)
if len(list(filter(lambda x: x != -1, lt_col_overlap))) == 0: if len(list(filter(lambda x: x != -1, lt_col_overlap))) == 0:
text = t.get_text().strip("\n") text = t.get_text().strip('\n')
text_range = (t.x0, t.x1) text_range = (t.x0, t.x1)
col_range = (table.cols[0][0], table.cols[-1][1]) col_range = (table.cols[0][0], table.cols[-1][1])
warnings.warn( warnings.warn("{} {} does not lie in column range {}".format(
f"{text} {text_range} does not lie in column range {col_range}" text, text_range, col_range))
)
r_idx = r r_idx = r
c_idx = lt_col_overlap.index(max(lt_col_overlap)) c_idx = lt_col_overlap.index(max(lt_col_overlap))
break break
@ -760,26 +586,12 @@ def get_table_index(
error = ((X * (y0_offset + y1_offset)) + (Y * (x0_offset + x1_offset))) / charea error = ((X * (y0_offset + y1_offset)) + (Y * (x0_offset + x1_offset))) / charea
if split_text: if split_text:
return ( return split_textline(table, t, direction, flag_size=flag_size, strip_text=strip_text), error
split_textline(
table, t, direction, flag_size=flag_size, strip_text=strip_text
),
error,
)
else: else:
if flag_size: if flag_size:
return ( return [(r_idx, c_idx, flag_font_size(t._objs, direction, strip_text=strip_text))], error
[
(
r_idx,
c_idx,
flag_font_size(t._objs, direction, strip_text=strip_text),
)
],
error,
)
else: else:
return [(r_idx, c_idx, text_strip(t.get_text(), strip_text))], error return [(r_idx, c_idx, t.get_text().strip(strip_text))], error
def compute_accuracy(error_weights): def compute_accuracy(error_weights):
@ -830,35 +642,25 @@ def compute_whitespace(d):
r_nempty_cells, c_nempty_cells = [], [] r_nempty_cells, c_nempty_cells = [], []
for i in d: for i in d:
for j in i: for j in i:
if j.strip() == "": if j.strip() == '':
whitespace += 1 whitespace += 1
whitespace = 100 * (whitespace / float(len(d) * len(d[0]))) whitespace = 100 * (whitespace / float(len(d) * len(d[0])))
return whitespace return whitespace
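For example, on a 2x2 table where half of the cells are empty (assuming the helper is importable from ``camelot.utils``)::

    >>> from camelot.utils import compute_whitespace
    >>> compute_whitespace([['foo', ''], ['', 'bar']])
    50.0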
def get_page_layout( def get_page_layout(filename, char_margin=1.0, line_margin=0.5, word_margin=0.1,
filename, detect_vertical=True, all_texts=True):
line_overlap=0.5,
char_margin=1.0,
line_margin=0.5,
word_margin=0.1,
boxes_flow=0.5,
detect_vertical=True,
all_texts=True,
):
"""Returns a PDFMiner LTPage object and page dimension of a single """Returns a PDFMiner LTPage object and page dimension of a single
page pdf. To get the definitions of kwargs, see page pdf. See https://euske.github.io/pdfminer/ to get definitions
https://pdfminersix.rtfd.io/en/latest/reference/composable.html. of kwargs.
Parameters Parameters
---------- ----------
filename : string filename : string
Path to pdf file. Path to pdf file.
line_overlap : float
char_margin : float char_margin : float
line_margin : float line_margin : float
word_margin : float word_margin : float
boxes_flow : float
detect_vertical : bool detect_vertical : bool
all_texts : bool all_texts : bool
@ -870,22 +672,16 @@ def get_page_layout(
Dimension of pdf page in the form (width, height). Dimension of pdf page in the form (width, height).
""" """
with open(filename, "rb") as f: with open(filename, 'rb') as f:
parser = PDFParser(f) parser = PDFParser(f)
document = PDFDocument(parser) document = PDFDocument(parser)
if not document.is_extractable: if not document.is_extractable:
raise PDFTextExtractionNotAllowed( raise PDFTextExtractionNotAllowed
f"Text extraction is not allowed: {filename}" laparams = LAParams(char_margin=char_margin,
) line_margin=line_margin,
laparams = LAParams( word_margin=word_margin,
line_overlap=line_overlap, detect_vertical=detect_vertical,
char_margin=char_margin, all_texts=all_texts)
line_margin=line_margin,
word_margin=word_margin,
boxes_flow=boxes_flow,
detect_vertical=detect_vertical,
all_texts=all_texts,
)
rsrcmgr = PDFResourceManager() rsrcmgr = PDFResourceManager()
device = PDFPageAggregator(rsrcmgr, laparams=laparams) device = PDFPageAggregator(rsrcmgr, laparams=laparams)
interpreter = PDFPageInterpreter(rsrcmgr, device) interpreter = PDFPageInterpreter(rsrcmgr, device)
@ -919,11 +715,9 @@ def get_text_objects(layout, ltype="char", t=None):
""" """
if ltype == "char": if ltype == "char":
LTObject = LTChar LTObject = LTChar
elif ltype == "image": elif ltype == "lh":
LTObject = LTImage
elif ltype == "horizontal_text":
LTObject = LTTextLineHorizontal LTObject = LTTextLineHorizontal
elif ltype == "vertical_text": elif ltype == "lv":
LTObject = LTTextLineVertical LTObject = LTTextLineVertical
if t is None: if t is None:
t = [] t = []
@ -1,4 +0,0 @@
"Età dellAssicuratoallepoca del decesso","Misura % dimaggiorazione"
"18-75","1,00%"
"76-80","0,50%"
"81 in poi","0,10%"
1 Età dell’Assicuratoall’epoca del decesso Misura % dimaggiorazione
2 18-75 1,00%
3 76-80 0,50%
4 81 in poi 0,10%
@ -4,13 +4,13 @@
</a> </a>
</p> </p>
<p> <p>
<iframe src="https://ghbtns.com/github-btn.html?user=camelot-dev&repo=camelot&type=watch&count=true&size=large" <iframe src="https://ghbtns.com/github-btn.html?user=socialcopsdev&repo=camelot&type=watch&count=true&size=large"
allowtransparency="true" frameborder="0" scrolling="0" width="200px" height="35px"></iframe> allowtransparency="true" frameborder="0" scrolling="0" width="200px" height="35px"></iframe>
</p> </p>
<h3>Useful Links</h3> <h3>Useful Links</h3>
<ul> <ul>
<li><a href="https://github.com/camelot-dev/camelot">Camelot @ GitHub</a></li> <li><a href="https://github.com/socialcopsdev/camelot">Camelot @ GitHub</a></li>
<li><a href="https://pypi.org/project/camelot-py/">Camelot @ PyPI</a></li> <li><a href="https://pypi.org/project/camelot-py/">Camelot @ PyPI</a></li>
<li><a href="https://github.com/camelot-dev/camelot/issues">Issue Tracker</a></li> <li><a href="https://github.com/socialcopsdev/camelot/issues">Issue Tracker</a></li>
</ul> </ul>
@ -4,6 +4,6 @@
</a> </a>
</p> </p>
<p> <p>
<iframe src="https://ghbtns.com/github-btn.html?user=camelot-dev&repo=camelot&type=watch&count=true&size=large" <iframe src="https://ghbtns.com/github-btn.html?user=socialcopsdev&repo=camelot&type=watch&count=true&size=large"
allowtransparency="true" frameborder="0" scrolling="0" width="200px" height="35px"></iframe> allowtransparency="true" frameborder="0" scrolling="0" width="200px" height="35px"></iframe>
</p> </p>
@ -1,19 +1,7 @@
# flasky pygments style based on tango style # flasky pygments style based on tango style
from pygments.style import Style from pygments.style import Style
from pygments.token import ( from pygments.token import Keyword, Name, Comment, String, Error, \
Keyword, Number, Operator, Generic, Whitespace, Punctuation, Other, Literal
Name,
Comment,
String,
Error,
Number,
Operator,
Generic,
Whitespace,
Punctuation,
Other,
Literal,
)
class FlaskyStyle(Style): class FlaskyStyle(Style):
@ -23,67 +11,76 @@ class FlaskyStyle(Style):
styles = { styles = {
# No corresponding class for the following: # No corresponding class for the following:
# Text: "", # class: '' # Text: "", # class: ''
Whitespace: "underline #f8f8f8", # class: 'w' Whitespace: "underline #f8f8f8", # class: 'w'
Error: "#a40000 border:#ef2929", # class: 'err' Error: "#a40000 border:#ef2929", # class: 'err'
Other: "#000000", # class 'x' Other: "#000000", # class 'x'
Comment: "italic #8f5902", # class: 'c'
Comment.Preproc: "noitalic", # class: 'cp' Comment: "italic #8f5902", # class: 'c'
Keyword: "bold #004461", # class: 'k' Comment.Preproc: "noitalic", # class: 'cp'
Keyword.Constant: "bold #004461", # class: 'kc'
Keyword.Declaration: "bold #004461", # class: 'kd' Keyword: "bold #004461", # class: 'k'
Keyword.Namespace: "bold #004461", # class: 'kn' Keyword.Constant: "bold #004461", # class: 'kc'
Keyword.Pseudo: "bold #004461", # class: 'kp' Keyword.Declaration: "bold #004461", # class: 'kd'
Keyword.Reserved: "bold #004461", # class: 'kr' Keyword.Namespace: "bold #004461", # class: 'kn'
Keyword.Type: "bold #004461", # class: 'kt' Keyword.Pseudo: "bold #004461", # class: 'kp'
Operator: "#582800", # class: 'o' Keyword.Reserved: "bold #004461", # class: 'kr'
Operator.Word: "bold #004461", # class: 'ow' - like keywords Keyword.Type: "bold #004461", # class: 'kt'
Punctuation: "bold #000000", # class: 'p'
Operator: "#582800", # class: 'o'
Operator.Word: "bold #004461", # class: 'ow' - like keywords
Punctuation: "bold #000000", # class: 'p'
# because special names such as Name.Class, Name.Function, etc. # because special names such as Name.Class, Name.Function, etc.
# are not recognized as such later in the parsing, we choose them # are not recognized as such later in the parsing, we choose them
# to look the same as ordinary variables. # to look the same as ordinary variables.
Name: "#000000", # class: 'n' Name: "#000000", # class: 'n'
Name.Attribute: "#c4a000", # class: 'na' - to be revised Name.Attribute: "#c4a000", # class: 'na' - to be revised
Name.Builtin: "#004461", # class: 'nb' Name.Builtin: "#004461", # class: 'nb'
Name.Builtin.Pseudo: "#3465a4", # class: 'bp' Name.Builtin.Pseudo: "#3465a4", # class: 'bp'
Name.Class: "#000000", # class: 'nc' - to be revised Name.Class: "#000000", # class: 'nc' - to be revised
Name.Constant: "#000000", # class: 'no' - to be revised Name.Constant: "#000000", # class: 'no' - to be revised
Name.Decorator: "#888", # class: 'nd' - to be revised Name.Decorator: "#888", # class: 'nd' - to be revised
Name.Entity: "#ce5c00", # class: 'ni' Name.Entity: "#ce5c00", # class: 'ni'
Name.Exception: "bold #cc0000", # class: 'ne' Name.Exception: "bold #cc0000", # class: 'ne'
Name.Function: "#000000", # class: 'nf' Name.Function: "#000000", # class: 'nf'
Name.Property: "#000000", # class: 'py' Name.Property: "#000000", # class: 'py'
Name.Label: "#f57900", # class: 'nl' Name.Label: "#f57900", # class: 'nl'
Name.Namespace: "#000000", # class: 'nn' - to be revised Name.Namespace: "#000000", # class: 'nn' - to be revised
Name.Other: "#000000", # class: 'nx' Name.Other: "#000000", # class: 'nx'
Name.Tag: "bold #004461", # class: 'nt' - like a keyword Name.Tag: "bold #004461", # class: 'nt' - like a keyword
Name.Variable: "#000000", # class: 'nv' - to be revised Name.Variable: "#000000", # class: 'nv' - to be revised
Name.Variable.Class: "#000000", # class: 'vc' - to be revised Name.Variable.Class: "#000000", # class: 'vc' - to be revised
Name.Variable.Global: "#000000", # class: 'vg' - to be revised Name.Variable.Global: "#000000", # class: 'vg' - to be revised
Name.Variable.Instance: "#000000", # class: 'vi' - to be revised Name.Variable.Instance: "#000000", # class: 'vi' - to be revised
Number: "#990000", # class: 'm'
Literal: "#000000", # class: 'l' Number: "#990000", # class: 'm'
Literal.Date: "#000000", # class: 'ld'
String: "#4e9a06", # class: 's' Literal: "#000000", # class: 'l'
String.Backtick: "#4e9a06", # class: 'sb' Literal.Date: "#000000", # class: 'ld'
String.Char: "#4e9a06", # class: 'sc'
String.Doc: "italic #8f5902", # class: 'sd' - like a comment String: "#4e9a06", # class: 's'
String.Double: "#4e9a06", # class: 's2' String.Backtick: "#4e9a06", # class: 'sb'
String.Escape: "#4e9a06", # class: 'se' String.Char: "#4e9a06", # class: 'sc'
String.Heredoc: "#4e9a06", # class: 'sh' String.Doc: "italic #8f5902", # class: 'sd' - like a comment
String.Interpol: "#4e9a06", # class: 'si' String.Double: "#4e9a06", # class: 's2'
String.Other: "#4e9a06", # class: 'sx' String.Escape: "#4e9a06", # class: 'se'
String.Regex: "#4e9a06", # class: 'sr' String.Heredoc: "#4e9a06", # class: 'sh'
String.Single: "#4e9a06", # class: 's1' String.Interpol: "#4e9a06", # class: 'si'
String.Symbol: "#4e9a06", # class: 'ss' String.Other: "#4e9a06", # class: 'sx'
Generic: "#000000", # class: 'g' String.Regex: "#4e9a06", # class: 'sr'
Generic.Deleted: "#a40000", # class: 'gd' String.Single: "#4e9a06", # class: 's1'
Generic.Emph: "italic #000000", # class: 'ge' String.Symbol: "#4e9a06", # class: 'ss'
Generic.Error: "#ef2929", # class: 'gr'
Generic.Heading: "bold #000080", # class: 'gh' Generic: "#000000", # class: 'g'
Generic.Inserted: "#00A000", # class: 'gi' Generic.Deleted: "#a40000", # class: 'gd'
Generic.Output: "#888", # class: 'go' Generic.Emph: "italic #000000", # class: 'ge'
Generic.Prompt: "#745334", # class: 'gp' Generic.Error: "#ef2929", # class: 'gr'
Generic.Strong: "bold #000000", # class: 'gs' Generic.Heading: "bold #000080", # class: 'gh'
Generic.Subheading: "bold #800080", # class: 'gu' Generic.Inserted: "#00A000", # class: 'gi'
Generic.Traceback: "bold #a40000", # class: 'gt' Generic.Output: "#888", # class: 'go'
Generic.Prompt: "#745334", # class: 'gp'
Generic.Strong: "bold #000000", # class: 'gs'
Generic.Subheading: "bold #800080", # class: 'gu'
Generic.Traceback: "bold #a40000", # class: 'gt'
} }
@ -22,8 +22,8 @@ import sys
# sys.path.insert(0, os.path.abspath('..')) # sys.path.insert(0, os.path.abspath('..'))
# Insert Camelot's path into the system. # Insert Camelot's path into the system.
sys.path.insert(0, os.path.abspath("..")) sys.path.insert(0, os.path.abspath('..'))
sys.path.insert(0, os.path.abspath("_themes")) sys.path.insert(0, os.path.abspath('_themes'))
import camelot import camelot
@ -38,33 +38,33 @@ import camelot
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones. # ones.
extensions = [ extensions = [
"sphinx.ext.autodoc", 'sphinx.ext.autodoc',
"sphinx.ext.napoleon", 'sphinx.ext.napoleon',
"sphinx.ext.intersphinx", 'sphinx.ext.intersphinx',
"sphinx.ext.todo", 'sphinx.ext.todo',
"sphinx.ext.viewcode", 'sphinx.ext.viewcode',
] ]
# Add any paths that contain templates here, relative to this directory. # Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"] templates_path = ['_templates']
# The suffix(es) of source filenames. # The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string: # You can specify multiple suffix as a list of string:
# #
# source_suffix = ['.rst', '.md'] # source_suffix = ['.rst', '.md']
source_suffix = ".rst" source_suffix = '.rst'
# The encoding of source files. # The encoding of source files.
# #
# source_encoding = 'utf-8-sig' # source_encoding = 'utf-8-sig'
# The master toctree document. # The master toctree document.
master_doc = "index" master_doc = 'index'
# General information about the project. # General information about the project.
project = u"Camelot" project = u'Camelot'
copyright = u"2021, Camelot Developers" copyright = u'2018, <a href="https://socialcops.com" target="_blank">SocialCops</a>'
author = u"Vinayak Mehta" author = u'Vinayak Mehta'
# The version info for the project you're documenting, acts as replacement for # The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the # |version| and |release|, also used in various other places throughout the
@ -94,7 +94,7 @@ language = None
# List of patterns, relative to source directory, that match files and # List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files. # directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path # This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ["_build"] exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all # The reST default role (used for this markup: `text`) to use for all
# documents. # documents.
@ -114,7 +114,7 @@ add_module_names = True
# show_authors = False # show_authors = False
# The name of the Pygments (syntax highlighting) style to use. # The name of the Pygments (syntax highlighting) style to use.
pygments_style = "flask_theme_support.FlaskyStyle" pygments_style = 'flask_theme_support.FlaskyStyle'
# A list of ignored prefixes for module index sorting. # A list of ignored prefixes for module index sorting.
# modindex_common_prefix = [] # modindex_common_prefix = []
@ -130,18 +130,18 @@ todo_include_todos = True
# The theme to use for HTML and HTML Help pages. See the documentation for # The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes. # a list of builtin themes.
html_theme = "alabaster" html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme # Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the # further. For a list of options available for each theme, see the
# documentation. # documentation.
html_theme_options = { html_theme_options = {
"show_powered_by": False, 'show_powered_by': False,
"github_user": "camelot-dev", 'github_user': 'socialcopsdev',
"github_repo": "camelot", 'github_repo': 'camelot',
"github_banner": True, 'github_banner': True,
"show_related": False, 'show_related': False,
"note_bg": "#FFF59C", 'note_bg': '#FFF59C'
} }
# Add any paths that contain custom themes here, relative to this directory. # Add any paths that contain custom themes here, relative to this directory.
@ -164,12 +164,12 @@ html_theme_options = {
# The name of an image file (relative to this directory) to use as a favicon of # The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large. # pixels large.
html_favicon = "_static/favicon.ico" html_favicon = '_static/favicon.ico'
# Add any paths that contain custom static files (such as style sheets) here, # Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files, # relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css". # so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"] html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or # Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied # .htaccess) here, relative to this directory. These files are copied
@ -189,21 +189,10 @@ html_use_smartypants = True
# Custom sidebar templates, maps document names to template names. # Custom sidebar templates, maps document names to template names.
html_sidebars = { html_sidebars = {
"index": [ 'index': ['sidebarintro.html', 'relations.html', 'sourcelink.html',
"sidebarintro.html", 'searchbox.html', 'hacks.html'],
"relations.html", '**': ['sidebarlogo.html', 'localtoc.html', 'relations.html',
"sourcelink.html", 'sourcelink.html', 'searchbox.html', 'hacks.html']
"searchbox.html",
"hacks.html",
],
"**": [
"sidebarlogo.html",
"localtoc.html",
"relations.html",
"sourcelink.html",
"searchbox.html",
"hacks.html",
],
} }
# Additional templates that should be rendered to pages, maps page names to # Additional templates that should be rendered to pages, maps page names to
@ -260,30 +249,34 @@ html_show_copyright = True
# html_search_scorer = 'scorer.js' # html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder. # Output file base name for HTML help builder.
htmlhelp_basename = "Camelotdoc" htmlhelp_basename = 'Camelotdoc'
# -- Options for LaTeX output --------------------------------------------- # -- Options for LaTeX output ---------------------------------------------
latex_elements = { latex_elements = {
# The paper size ('letterpaper' or 'a4paper'). # The paper size ('letterpaper' or 'a4paper').
# #
# 'papersize': 'letterpaper', # 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# # The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt', #
# Additional stuff for the LaTeX preamble. # 'pointsize': '10pt',
#
# 'preamble': '', # Additional stuff for the LaTeX preamble.
# Latex figure (float) alignment #
# # 'preamble': '',
# 'figure_align': 'htbp',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
} }
# Grouping the document tree into LaTeX files. List of tuples # Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, # (source start file, target name, title,
# author, documentclass [howto, manual, or own class]). # author, documentclass [howto, manual, or own class]).
latex_documents = [ latex_documents = [
(master_doc, "Camelot.tex", u"Camelot Documentation", u"Vinayak Mehta", "manual"), (master_doc, 'Camelot.tex', u'Camelot Documentation',
u'Vinayak Mehta', 'manual'),
] ]
# The name of an image file (relative to this directory) to place at the top of # The name of an image file (relative to this directory) to place at the top of
@ -323,7 +316,10 @@ latex_documents = [
# One entry per manual page. List of tuples # One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section). # (source start file, name, description, authors, manual section).
man_pages = [(master_doc, "Camelot", u"Camelot Documentation", [author], 1)] man_pages = [
(master_doc, 'Camelot', u'Camelot Documentation',
[author], 1)
]
# If true, show URL addresses after external links. # If true, show URL addresses after external links.
# #
@ -336,15 +332,9 @@ man_pages = [(master_doc, "Camelot", u"Camelot Documentation", [author], 1)]
# (source start file, target name, title, author, # (source start file, target name, title, author,
# dir menu entry, description, category) # dir menu entry, description, category)
texinfo_documents = [ texinfo_documents = [
( (master_doc, 'Camelot', u'Camelot Documentation',
master_doc, author, 'Camelot', 'One line description of project.',
"Camelot", 'Miscellaneous'),
u"Camelot Documentation",
author,
"Camelot",
"One line description of project.",
"Miscellaneous",
),
] ]
# Documents to append as an appendix to all manuals. # Documents to append as an appendix to all manuals.
@ -366,6 +356,6 @@ texinfo_documents = [
# Example configuration for intersphinx: refer to the Python standard library. # Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = { intersphinx_mapping = {
"https://docs.python.org/2": None, 'https://docs.python.org/2': None,
"http://pandas.pydata.org/pandas-docs/stable": None, 'http://pandas.pydata.org/pandas-docs/stable': None
} }
@ -29,15 +29,15 @@ Your first contribution
A great way to start contributing to Camelot is to pick an issue tagged with the `help wanted`_ or the `good first issue`_ tags. If you're unable to find a good first issue, feel free to contact the maintainer. A great way to start contributing to Camelot is to pick an issue tagged with the `help wanted`_ or the `good first issue`_ tags. If you're unable to find a good first issue, feel free to contact the maintainer.
.. _help wanted: https://github.com/camelot-dev/camelot/labels/help%20wanted .. _help wanted: https://github.com/socialcopsdev/camelot/labels/help%20wanted
.. _good first issue: https://github.com/camelot-dev/camelot/labels/good%20first%20issue .. _good first issue: https://github.com/socialcopsdev/camelot/labels/good%20first%20issue
Setting up a development environment Setting up a development environment
------------------------------------ ------------------------------------
To install the dependencies needed for development, you can use pip:: To install the dependencies needed for development, you can use pip::
$ pip install "camelot-py[dev]" $ pip install camelot-py[dev]
Alternatively, you can clone the project repository, and install using pip:: Alternatively, you can clone the project repository, and install using pip::
@ -51,7 +51,7 @@ Submit a pull request
The preferred workflow for contributing to Camelot is to fork the `project repository`_ on GitHub, clone, develop on a branch and then finally submit a pull request. Here are the steps: The preferred workflow for contributing to Camelot is to fork the `project repository`_ on GitHub, clone, develop on a branch and then finally submit a pull request. Here are the steps:
.. _project repository: https://github.com/camelot-dev/camelot .. _project repository: https://github.com/socialcopsdev/camelot
1. Fork the project repository. Click on the Fork button near the top of the page. This creates a copy of the code under your account on the GitHub. 1. Fork the project repository. Click on the Fork button near the top of the page. This creates a copy of the code under your account on the GitHub.
@ -134,7 +134,7 @@ Filing Issues
We use `GitHub issues`_ to keep track of all issues and pull requests. Before opening an issue (which asks a question or reports a bug), please use GitHub search to look for existing issues (both open and closed) that may be similar. We use `GitHub issues`_ to keep track of all issues and pull requests. Before opening an issue (which asks a question or reports a bug), please use GitHub search to look for existing issues (both open and closed) that may be similar.
.. _GitHub issues: https://github.com/camelot-dev/camelot/issues .. _GitHub issues: https://github.com/socialcopsdev/camelot/issues
Questions Questions
^^^^^^^^^ ^^^^^^^^^
@ -8,15 +8,15 @@ Camelot: PDF Table Extraction for Humans
Release v\ |version|. (:ref:`Installation <install>`) Release v\ |version|. (:ref:`Installation <install>`)
.. image:: https://travis-ci.org/camelot-dev/camelot.svg?branch=master .. image:: https://travis-ci.org/socialcopsdev/camelot.svg?branch=master
:target: https://travis-ci.org/camelot-dev/camelot :target: https://travis-ci.org/socialcopsdev/camelot
.. image:: https://readthedocs.org/projects/camelot-py/badge/?version=master .. image:: https://readthedocs.org/projects/camelot-py/badge/?version=master
:target: https://camelot-py.readthedocs.io/en/master/ :target: https://camelot-py.readthedocs.io/en/master/
:alt: Documentation Status :alt: Documentation Status
.. image:: https://codecov.io/github/camelot-dev/camelot/badge.svg?branch=master&service=github .. image:: https://codecov.io/github/socialcopsdev/camelot/badge.svg?branch=master&service=github
:target: https://codecov.io/github/camelot-dev/camelot?branch=master :target: https://codecov.io/github/socialcopsdev/camelot?branch=master
.. image:: https://img.shields.io/pypi/v/camelot-py.svg .. image:: https://img.shields.io/pypi/v/camelot-py.svg
:target: https://pypi.org/project/camelot-py/ :target: https://pypi.org/project/camelot-py/
@ -30,21 +30,15 @@ Release v\ |version|. (:ref:`Installation <install>`)
.. image:: https://badges.gitter.im/camelot-dev/Lobby.png .. image:: https://badges.gitter.im/camelot-dev/Lobby.png
:target: https://gitter.im/camelot-dev/Lobby :target: https://gitter.im/camelot-dev/Lobby
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg **Camelot** is a Python library that makes it easy for *anyone* to extract tables from PDF files!
:target: https://github.com/ambv/black
.. image:: https://img.shields.io/badge/continous%20quality-deepsource-lightgrey .. note:: You can also check out `Excalibur`_, which is a web interface for Camelot!
:target: https://deepsource.io/gh/camelot-dev/camelot/?ref=repository-badge
**Camelot** is a Python library that can help you extract tables from PDFs!
.. note:: You can also check out `Excalibur`_, the web interface to Camelot!
.. _Excalibur: https://github.com/camelot-dev/excalibur .. _Excalibur: https://github.com/camelot-dev/excalibur
---- ----
**Here's how you can extract tables from PDFs.** You can check out the PDF used in this example `here`_. **Here's how you can extract tables from PDF files.** Check out the PDF used in this example `here`_.
.. _here: _static/pdf/foo.pdf .. _here: _static/pdf/foo.pdf
@ -54,7 +48,7 @@ Release v\ |version|. (:ref:`Installation <install>`)
>>> tables = camelot.read_pdf('foo.pdf') >>> tables = camelot.read_pdf('foo.pdf')
>>> tables >>> tables
<TableList n=1> <TableList n=1>
>>> tables.export('foo.csv', f='csv', compress=True) # json, excel, html, markdown, sqlite >>> tables.export('foo.csv', f='csv', compress=True) # json, excel, html
>>> tables[0] >>> tables[0]
<Table shape=(7, 7)> <Table shape=(7, 7)>
>>> tables[0].parsing_report >>> tables[0].parsing_report
@ -64,44 +58,35 @@ Release v\ |version|. (:ref:`Installation <install>`)
'order': 1, 'order': 1,
'page': 1 'page': 1
} }
>>> tables[0].to_csv('foo.csv') # to_json, to_excel, to_html, to_markdown, to_sqlite >>> tables[0].to_csv('foo.csv') # to_json, to_excel, to_html
>>> tables[0].df # get a pandas DataFrame! >>> tables[0].df # get a pandas DataFrame!
.. csv-table:: .. csv-table::
:file: _static/csv/foo.csv :file: _static/csv/foo.csv
Camelot also comes packaged with a :ref:`command-line interface <cli>`! There's a :ref:`command-line interface <cli>` too!
.. note:: Camelot only works with text-based PDFs and not scanned documents. (As Tabula `explains`_, "If you can click and drag to select text in your table in a PDF viewer, then your PDF is text-based".) .. note:: Camelot only works with text-based PDFs and not scanned documents. (As Tabula `explains`_, "If you can click and drag to select text in your table in a PDF viewer, then your PDF is text-based".)
You can check out some frequently asked questions :ref:`here <faq>`.
.. _explains: https://github.com/tabulapdf/tabula#why-tabula .. _explains: https://github.com/tabulapdf/tabula#why-tabula
Why Camelot? Why Camelot?
------------ ------------
- **Configurability**: Camelot gives you control over the table extraction process with :ref:`tweakable settings <advanced>`. - **You are in control.** Unlike other libraries and tools which either give a nice output or fail miserably (with no in-between), Camelot gives you the power to tweak table extraction. (This is important since everything in the real world, including PDF table extraction, is fuzzy.)
- **Metrics**: You can discard bad tables based on metrics like accuracy and whitespace, without having to manually look at each table. - *Bad* tables can be discarded based on **metrics** like accuracy and whitespace, without ever having to manually look at each table.
- **Output**: Each table is extracted into a **pandas DataFrame**, which seamlessly integrates into `ETL and data analysis workflows`_. You can also export tables to multiple formats, which include CSV, JSON, Excel, HTML, Markdown, and Sqlite. - Each table is a **pandas DataFrame**, which seamlessly integrates into `ETL and data analysis workflows`_.
- **Export** to multiple formats, including JSON, Excel and HTML.
See `comparison with other PDF table extraction libraries and tools`_.
.. _ETL and data analysis workflows: https://gist.github.com/vinayak-mehta/e5949f7c2410a0e12f25d3682dc9e873 .. _ETL and data analysis workflows: https://gist.github.com/vinayak-mehta/e5949f7c2410a0e12f25d3682dc9e873
.. _comparison with other PDF table extraction libraries and tools: https://github.com/socialcopsdev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools
See `comparison with similar libraries and tools`_.
.. _comparison with similar libraries and tools: https://github.com/camelot-dev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools
Support the development
-----------------------
If Camelot has helped you, please consider supporting its development with a one-time or monthly donation `on OpenCollective`_!
.. _on OpenCollective: https://opencollective.com/camelot
The User Guide The User Guide
-------------- --------------
This part of the documentation begins with some background information about why Camelot was created, takes you through some implementation details, and then focuses on step-by-step instructions for getting the most out of Camelot. This part of the documentation begins with some background information about why Camelot was created, takes a small dip into the implementation details and then focuses on step-by-step instructions for getting the most out of Camelot.
.. toctree:: .. toctree::
:maxdepth: 2 :maxdepth: 2
@ -112,13 +97,13 @@ This part of the documentation begins with some background information about why
user/how-it-works user/how-it-works
user/quickstart user/quickstart
user/advanced user/advanced
user/faq
user/cli user/cli
The API Documentation/Guide The API Documentation/Guide
--------------------------- ---------------------------
If you are looking for information on a specific function, class, or method, this part of the documentation is for you. If you are looking for information on a specific function, class, or method,
this part of the documentation is for you.
.. toctree:: .. toctree::
:maxdepth: 2 :maxdepth: 2
@ -128,7 +113,8 @@ If you are looking for information on a specific function, class, or method, thi
The Contributor Guide The Contributor Guide
--------------------- ---------------------
If you want to contribute to the project, this part of the documentation is for you. If you want to contribute to the project, this part of the documentation is for
you.
.. toctree:: .. toctree::
:maxdepth: 2 :maxdepth: 2
@ -66,7 +66,8 @@ Let's plot all the text present on the table's PDF page.
:: ::
>>> camelot.plot(tables[0], kind='text').show() >>> camelot.plot(tables[0], kind='text')
>>> plt.show()
.. tip:: .. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`. Here's how you can do the same with the :ref:`command-line interface <cli>`.
@ -92,7 +93,8 @@ Let's plot the table (to see if it was detected correctly or not). This plot typ
:: ::
>>> camelot.plot(tables[0], kind='grid').show() >>> camelot.plot(tables[0], kind='grid')
>>> plt.show()
.. tip:: .. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`. Here's how you can do the same with the :ref:`command-line interface <cli>`.
@ -116,7 +118,8 @@ Now, let's plot all table boundaries present on the table's PDF page.
:: ::
>>> camelot.plot(tables[0], kind='contour').show() >>> camelot.plot(tables[0], kind='contour')
>>> plt.show()
.. tip:: .. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`. Here's how you can do the same with the :ref:`command-line interface <cli>`.
@ -138,7 +141,8 @@ Cool, let's plot all line segments present on the table's PDF page.
:: ::
>>> camelot.plot(tables[0], kind='line').show() >>> camelot.plot(tables[0], kind='line')
>>> plt.show()
.. tip:: .. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`. Here's how you can do the same with the :ref:`command-line interface <cli>`.
@ -160,7 +164,8 @@ Finally, let's plot all line intersections present on the table's PDF page.
:: ::
>>> camelot.plot(tables[0], kind='joint').show() >>> camelot.plot(tables[0], kind='joint')
>>> plt.show()
.. tip:: .. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`. Here's how you can do the same with the :ref:`command-line interface <cli>`.
@ -182,7 +187,8 @@ You can also visualize the textedges found on a page by specifying ``kind='texte
:: ::
>>> camelot.plot(tables[0], kind='textedge').show() >>> camelot.plot(tables[0], kind='textedge')
>>> plt.show()
.. tip:: .. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`. Here's how you can do the same with the :ref:`command-line interface <cli>`.
@ -200,10 +206,12 @@ You can also visualize the textedges found on a page by specifying ``kind='texte
Specify table areas Specify table areas
------------------- -------------------
In cases such as `these <../_static/pdf/table_areas.pdf>`__, it can be useful to specify exact table boundaries. You can plot the text on this page and note the top left and bottom right coordinates of the table. In cases such as `these <../_static/pdf/table_areas.pdf>`__, it can be useful to specify table boundaries. You can plot the text on this page and note the top left and bottom right coordinates of the table.
Table areas that you want Camelot to analyze can be passed as a list of comma-separated strings to :meth:`read_pdf() <camelot.read_pdf>`, using the ``table_areas`` keyword argument. Table areas that you want Camelot to analyze can be passed as a list of comma-separated strings to :meth:`read_pdf() <camelot.read_pdf>`, using the ``table_areas`` keyword argument.
.. _for now: https://github.com/socialcopsdev/camelot/issues/102
:: ::
>>> tables = camelot.read_pdf('table_areas.pdf', flavor='stream', table_areas=['316,499,566,337']) >>> tables = camelot.read_pdf('table_areas.pdf', flavor='stream', table_areas=['316,499,566,337'])
@ -218,29 +226,6 @@ Table areas that you want Camelot to analyze can be passed as a list of comma-se
.. csv-table:: .. csv-table::
:file: ../_static/csv/table_areas.csv :file: ../_static/csv/table_areas.csv
.. note:: ``table_areas`` accepts strings of the form x1,y1,x2,y2 where (x1, y1) -> top-left and (x2, y2) -> bottom-right in PDF coordinate space. In PDF coordinate space, the bottom-left corner of the page is the origin, with coordinates (0, 0).
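As a minimal sketch (coordinates taken from the example above), the ``x1,y1,x2,y2`` string can be assembled from the two corners like this::

>>> top_left, bottom_right = (316, 499), (566, 337)
>>> area = "{},{},{},{}".format(*top_left, *bottom_right)
>>> tables = camelot.read_pdf('table_areas.pdf', flavor='stream', table_areas=[area])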
Specify table regions
---------------------
However, there may be cases like `[1] <../_static/pdf/table_regions.pdf>`__ and `[2] <https://github.com/camelot-dev/camelot/blob/master/tests/files/tableception.pdf>`__, where the table might not lie at the exact coordinates every time but rather in an approximate region.
You can use the ``table_regions`` keyword argument to :meth:`read_pdf() <camelot.read_pdf>` to solve for such cases. When ``table_regions`` is specified, Camelot will only analyze the specified regions to look for tables.
::
>>> tables = camelot.read_pdf('table_regions.pdf', table_regions=['170,370,560,270'])
>>> tables[0].df
.. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`.
::
$ camelot lattice -R 170,370,560,270 table_regions.pdf
.. csv-table::
:file: ../_static/csv/table_regions.csv
Specify column separators Specify column separators
------------------------- -------------------------
@ -310,7 +295,7 @@ In this case, the text that `other tools`_ return, will be ``24.912``. This is r
You can solve this by passing ``flag_size=True``, which will enclose the superscripts and subscripts with ``<s></s>``, based on font size, as shown below. You can solve this by passing ``flag_size=True``, which will enclose the superscripts and subscripts with ``<s></s>``, based on font size, as shown below.
.. _other tools: https://github.com/camelot-dev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools .. _other tools: https://github.com/socialcopsdev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools
:: ::
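>>> # A sketch in line with the test suite's superscript example (filename assumed);
>>> # flag_size=True wraps super/subscripts in <s></s> based on font size
>>> tables = camelot.read_pdf('superscript.pdf', flavor='stream', flag_size=True)
>>> tables[0].df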
@ -334,7 +319,7 @@ You can solve this by passing ``flag_size=True``, which will enclose the supersc
Strip characters from text Strip characters from text
-------------------------- --------------------------
You can strip unwanted characters like spaces, dots and newlines from a string using the ``strip_text`` keyword argument. Take a look at `this PDF <https://github.com/camelot-dev/camelot/blob/master/tests/files/tabula/12s0324.pdf>`_ as an example: the text at the start of each row contains a lot of unwanted spaces, dots and newlines. You can strip unwanted characters like spaces, dots and newlines from a string using the ``strip_text`` keyword argument. Take a look at `this PDF <https://github.com/socialcopsdev/camelot/blob/master/tests/files/tabula/12s0324.pdf>`_ as an example: the text at the start of each row contains a lot of unwanted spaces, dots and newlines.
:: ::
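>>> # Sketch (characters to strip are assumed): remove spaces, dots and newlines
>>> tables = camelot.read_pdf('12s0324.pdf', flavor='stream', strip_text=' .\n')
>>> tables[0].df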
@ -360,7 +345,7 @@ You can strip unwanted characters like spaces, dots and newlines from a string u
Improve guessed table areas Improve guessed table areas
--------------------------- ---------------------------
While using :ref:`Stream <stream>`, automatic table detection can fail for PDFs like `this one <https://github.com/camelot-dev/camelot/blob/master/tests/files/edge_tol.pdf>`_. That's because the text is relatively far apart vertically, which can lead to shorter textedges being calculated. While using :ref:`Stream <stream>`, automatic table detection can fail for PDFs like `this one <https://github.com/socialcopsdev/camelot/blob/master/tests/files/edge_tol.pdf>`_. That's because the text is relatively far apart vertically, which can lead to shorter textedges being calculated.
.. note:: To know more about how textedges are calculated to guess table areas, you can see pages 20, 35 and 40 of `Anssi Nurminen's master's thesis <http://dspace.cc.tut.fi/dpub/bitstream/handle/123456789/21520/Nurminen.pdf?sequence=3>`_. .. note:: To know more about how textedges are calculated to guess table areas, you can see pages 20, 35 and 40 of `Anssi Nurminen's master's thesis <http://dspace.cc.tut.fi/dpub/bitstream/handle/123456789/21520/Nurminen.pdf?sequence=3>`_.
@ -369,7 +354,8 @@ Let's see the table area that is detected by default.
:: ::
>>> tables = camelot.read_pdf('edge_tol.pdf', flavor='stream') >>> tables = camelot.read_pdf('edge_tol.pdf', flavor='stream')
>>> camelot.plot(tables[0], kind='contour').show() >>> camelot.plot(tables[0], kind='contour')
>>> plt.show()
.. tip:: .. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`. Here's how you can do the same with the :ref:`command-line interface <cli>`.
@ -389,7 +375,8 @@ To improve the detected area, you can increase the ``edge_tol`` (default: 50) va
:: ::
>>> tables = camelot.read_pdf('edge_tol.pdf', flavor='stream', edge_tol=500) >>> tables = camelot.read_pdf('edge_tol.pdf', flavor='stream', edge_tol=500)
>>> camelot.plot(tables[0], kind='contour').show() >>> camelot.plot(tables[0], kind='contour')
>>> plt.show()
.. tip:: .. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`. Here's how you can do the same with the :ref:`command-line interface <cli>`.
@ -447,11 +434,11 @@ You can pass ``row_tol=<+int>`` to group the rows closer together, as shown belo
Detect short lines Detect short lines
------------------ ------------------
There might be cases while using :ref:`Lattice <lattice>` when smaller lines don't get detected. The size of the smallest line that gets detected is calculated by dividing the PDF page's dimensions with a scaling factor called ``line_scale``. By default, its value is 15. There might be cases while using :ref:`Lattice <lattice>` when smaller lines don't get detected. The size of the smallest line that gets detected is calculated by dividing the PDF page's dimensions with a scaling factor called ``line_size_scaling``. By default, its value is 15.
As you can guess, the larger the ``line_scale``, the smaller the size of lines getting detected. As you can guess, the larger the ``line_size_scaling``, the smaller the size of lines getting detected.
.. warning:: Making ``line_scale`` very large (>150) will lead to text getting detected as lines. .. warning:: Making ``line_size_scaling`` very large (>150) will lead to text getting detected as lines.
Here's a `PDF <../_static/pdf/short_lines.pdf>`__ where small lines separating the headers don't get detected with the default value of 15. Here's a `PDF <../_static/pdf/short_lines.pdf>`__ where small lines separating the headers don't get detected with the default value of 15.
@ -464,18 +451,20 @@ Let's plot the table for this PDF.
:: ::
>>> tables = camelot.read_pdf('short_lines.pdf') >>> tables = camelot.read_pdf('short_lines.pdf')
>>> camelot.plot(tables[0], kind='grid').show() >>> camelot.plot(tables[0], kind='grid')
>>> plt.show()
.. figure:: ../_static/png/short_lines_1.png .. figure:: ../_static/png/short_lines_1.png
:alt: A plot of the PDF table with short lines :alt: A plot of the PDF table with short lines
:align: left :align: left
Clearly, the smaller lines separating the headers couldn't be detected. Let's try with ``line_scale=40`` and plot the table again. Clearly, the smaller lines separating the headers couldn't be detected. Let's try with ``line_size_scaling=40`` and plot the table again.
:: ::
>>> tables = camelot.read_pdf('short_lines.pdf', line_scale=40) >>> tables = camelot.read_pdf('short_lines.pdf', line_size_scaling=40)
>>> camelot.plot(tables[0], kind='grid').show() >>> camelot.plot(tables[0], kind='grid')
>>> plt.show()
.. tip:: .. tip::
Here's how you can do the same with the :ref:`command-line interface <cli>`. Here's how you can do the same with the :ref:`command-line interface <cli>`.
@ -522,7 +511,7 @@ We'll use the `PDF <../_static/pdf/short_lines.pdf>`__ from the previous example
:: ::
>>> tables = camelot.read_pdf('short_lines.pdf', line_scale=40, shift_text=['']) >>> tables = camelot.read_pdf('short_lines.pdf', line_size_scaling=40, shift_text=[''])
>>> tables[0].df >>> tables[0].df
.. csv-table:: .. csv-table::
@ -543,7 +532,7 @@ No surprises there — it did remain in place (observe the strings "2400" and "A
:: ::
>>> tables = camelot.read_pdf('short_lines.pdf', line_scale=40, shift_text=['r', 'b']) >>> tables = camelot.read_pdf('short_lines.pdf', line_size_scaling=40, shift_text=['r', 'b'])
>>> tables[0].df >>> tables[0].df
.. tip:: .. tip::
@ -616,33 +605,10 @@ We don't need anything else. Now, let's pass ``copy_text=['v']`` to copy text in
Tweak layout generation Tweak layout generation
----------------------- -----------------------
Camelot is built on top of PDFMiner's functionality of grouping characters on a page into words and sentences. In some cases (such as `#170 <https://github.com/camelot-dev/camelot/issues/170>`_ and `#215 <https://github.com/camelot-dev/camelot/issues/215>`_), PDFMiner can group characters that should belong to the same sentence into separate sentences. Camelot is built on top of PDFMiner's functionality of grouping characters on a page into words and sentences. In some cases (such as `#170 <https://github.com/socialcopsdev/camelot/issues/170>`_ and `#215 <https://github.com/socialcopsdev/camelot/issues/215>`_), PDFMiner can group characters that should belong to the same sentence into separate sentences.
To deal with such cases, you can tweak PDFMiner's `LAParams kwargs <https://github.com/euske/pdfminer/blob/master/pdfminer/layout.py#L33>`_ to improve layout generation, by passing the keyword arguments as a dict using ``layout_kwargs`` in :meth:`read_pdf() <camelot.read_pdf>`. To know more about the parameters you can tweak, you can check out `PDFMiner docs <https://pdfminersix.rtfd.io/en/latest/reference/composable.html>`_. To deal with such cases, you can tweak PDFMiner's `LAParams kwargs <https://github.com/euske/pdfminer/blob/master/pdfminer/layout.py#L33>`_ to improve layout generation, by passing the keyword arguments as a dict using ``layout_kwargs`` in :meth:`read_pdf() <camelot.read_pdf>`. To know more about the parameters you can tweak, you can check out `PDFMiner docs <https://euske.github.io/pdfminer/>`_.
:: ::
>>> tables = camelot.read_pdf('foo.pdf', layout_kwargs={'detect_vertical': False}) >>> tables = camelot.read_pdf('foo.pdf', layout_kwargs={'detect_vertical': False})
.. _image-conversion-backend:
Use alternate image conversion backends
---------------------------------------
When using the :ref:`Lattice <lattice>` flavor, Camelot uses ``ghostscript`` to convert PDF pages to images for line recognition. If you face installation issues with ``ghostscript``, you can use an alternate image conversion backend called ``poppler``. You can specify which image conversion backend you want to use with::
>>> tables = camelot.read_pdf(filename, backend="ghostscript") # default
>>> tables = camelot.read_pdf(filename, backend="poppler")
.. note:: ``ghostscript`` will be replaced by ``poppler`` as the default image conversion backend in ``v0.12.0``.
If you face issues with both ``ghostscript`` and ``poppler``, you can supply your own image conversion backend::
>>> class ConversionBackend(object):
>>> def convert(pdf_path, png_path):
>>> # read pdf page from pdf_path
>>> # convert pdf page to image
>>> # write image to png_path
>>> pass
>>>
>>> tables = camelot.read_pdf(filename, backend=ConversionBackend())
View File
@ -26,8 +26,6 @@ You can print the help for the interface by typing ``camelot --help`` in your fa
-split, --split_text Split text that spans across multiple cells. -split, --split_text Split text that spans across multiple cells.
-flag, --flag_size Flag text based on font size. Useful to -flag, --flag_size Flag text based on font size. Useful to
detect super/subscripts. detect super/subscripts.
-strip, --strip_text Characters that should be stripped from a
string before assigning it to a cell.
-M, --margins <FLOAT FLOAT FLOAT>... -M, --margins <FLOAT FLOAT FLOAT>...
PDFMiner char_margin, line_margin and PDFMiner char_margin, line_margin and
word_margin. word_margin.
View File
@ -1,70 +0,0 @@
.. _faq:
Frequently Asked Questions
==========================
This part of the documentation answers some common questions. To add questions, please open an issue `here <https://github.com/camelot-dev/camelot/issues/new>`_.
Does Camelot work with image-based PDFs?
----------------------------------------
**No**, Camelot only works with text-based PDFs and not scanned documents. (As Tabula `explains <https://github.com/tabulapdf/tabula#why-tabula>`_, "If you can click and drag to select text in your table in a PDF viewer, then your PDF is text-based".)
How to reduce memory usage for long PDFs?
-----------------------------------------
During table extraction from long PDF documents, RAM usage can grow significantly.
A simple workaround is to divide the extraction into chunks, and save extracted data to disk at the end of every chunk.
For more details, check out this code snippet from `@anakin87 <https://github.com/anakin87>`_:
::
import camelot

def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i : i + n]

def extract_tables(filepath, pages, chunk_size=50, export_path=".", params={}):
    """
    Divide the extraction work into chunks of `chunk_size` pages. At the end
    of every chunk, save data on disk and free RAM.

    filepath : str
        Filepath or URL of the PDF file.
    pages : str, optional (default: '1')
        Comma-separated page numbers.
        Example: '1,3,4' or '1,4-end' or 'all'.
    """
    # get list of pages from camelot.handlers.PDFHandler
    handler = camelot.handlers.PDFHandler(filepath)
    page_list = handler._get_pages(pages=pages)

    # split the page list into chunks of `chunk_size` pages
    page_chunks = list(chunks(page_list, chunk_size))

    # extraction and export, one chunk at a time
    for chunk in page_chunks:
        pages_string = str(chunk).replace("[", "").replace("]", "")
        tables = camelot.read_pdf(filepath, pages=pages_string, **params)
        tables.export(f"{export_path}/tables.csv")
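# Example call (illustrative paths and parameters):
#   extract_tables("long_document.pdf", pages="all", chunk_size=50, export_path=".", params={"flavor": "stream"})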
How can I supply my own image conversion backend to Lattice?
------------------------------------------------------------
When using the :ref:`Lattice <lattice>` flavor, you can supply your own :ref:`image conversion backend <image-conversion-backend>` by creating a class with a ``convert`` method as follows::
>>> class ConversionBackend(object):
>>> def convert(pdf_path, png_path):
>>> # read pdf page from pdf_path
>>> # convert pdf page to image
>>> # write image to png_path
>>> pass
>>>
>>> tables = camelot.read_pdf(filename, backend=ConversionBackend())
View File
@ -16,11 +16,11 @@ Stream can be used to parse tables that have whitespaces between cells to simula
1. Words on the PDF page are grouped into text rows based on their *y* axis overlaps. 1. Words on the PDF page are grouped into text rows based on their *y* axis overlaps.
2. Textedges are calculated and then used to guess interesting table areas on the PDF page. You can read `Anssi Nurminen's master's thesis <https://pdfs.semanticscholar.org/a9b1/67a86fb189bfcd366c3839f33f0404db9c10.pdf>`_ to know more about this table detection technique. [See pages 20, 35 and 40] 2. Textedges are calculated and then used to guess interesting table areas on the PDF page. You can read `Anssi Nurminen's master's thesis <http://dspace.cc.tut.fi/dpub/bitstream/handle/123456789/21520/Nurminen.pdf?sequence=3>`_ to know more about this table detection technique. [See pages 20, 35 and 40]
3. The number of columns inside each table area are then guessed. This is done by calculating the mode of number of words in each text row. Based on this mode, words in each text row are chosen to calculate a list of column *x* ranges. 3. The number of columns inside each table area are then guessed. This is done by calculating the mode of number of words in each text row. Based on this mode, words in each text row are chosen to calculate a list of column *x* ranges.
4. Words that lie inside/outside the current column *x* ranges are then used to extend the current list of columns. 4. Words that lie inside/outside the current column *x* ranges are then used to extend the current list of columns.
5. Finally, a table is formed using the text rows' *y* ranges and column *x* ranges and words found on the page are assigned to the table's cells based on their *x* and *y* coordinates. 5. Finally, a table is formed using the text rows' *y* ranges and column *x* ranges and words found on the page are assigned to the table's cells based on their *x* and *y* coordinates.
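As a rough, simplified sketch (not Camelot's actual implementation) of steps 1 and 3 above, words can be grouped into text rows by their *y* overlap and the column count guessed from the mode of words per row::

# illustrative only; assumes words are (x0, y0, x1, y1, text) tuples in PDF coordinates
from collections import Counter

def group_rows(words, y_tol=2):
    rows = []
    for word in sorted(words, key=lambda w: -w[1]):  # top of the page first
        for row in rows:
            if abs(row[0][1] - word[1]) <= y_tol:    # same text row (y values overlap)
                row.append(word)
                break
        else:
            rows.append([word])
    return rows

def guess_column_count(rows):
    # the most common words-per-row value approximates the number of columns
    return Counter(len(row) for row in rows).most_common(1)[0][0]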
View File
@ -3,60 +3,74 @@
Installation of dependencies Installation of dependencies
============================ ============================
The dependencies `Ghostscript <https://www.ghostscript.com>`_ and `Tkinter <https://wiki.python.org/moin/TkInter>`_ can be installed using your system's package manager or by running their installer. The dependencies `Tkinter`_ and `ghostscript`_ can be installed using your system's package manager. You can run one of the following, based on your OS.
.. _Tkinter: https://wiki.python.org/moin/TkInter
.. _ghostscript: https://www.ghostscript.com
OS-specific instructions OS-specific instructions
------------------------ ------------------------
Ubuntu For Ubuntu
^^^^^^ ^^^^^^^^^^
:: ::
$ apt install ghostscript python3-tk $ apt install python-tk ghostscript
MacOS Or for Python 3::
^^^^^
$ apt install python3-tk ghostscript
For macOS
^^^^^^^^^
:: ::
$ brew install ghostscript tcl-tk $ brew install tcl-tk ghostscript
Windows For Windows
^^^^^^^ ^^^^^^^^^^^
For Ghostscript, you can get the installer at their `downloads page <https://www.ghostscript.com/download/gsdnld.html>`_. And for Tkinter, you can download the `ActiveTcl Community Edition <https://www.activestate.com/activetcl/downloads>`_ from ActiveState. For Tkinter, you can download the `ActiveTcl Community Edition`_ from ActiveState. For ghostscript, you can get the installer at the `ghostscript downloads page`_.
Checks to see if dependencies are installed correctly After installing ghostscript, you'll need to reboot your system to make sure that the ghostscript executable's path is in the windows PATH environment variable. In case you don't want to reboot, you can manually add the ghostscript executable's path to the PATH variable, `as shown here`_.
-----------------------------------------------------
You can run the following checks to see if the dependencies were installed correctly. .. _ActiveTcl Community Edition: https://www.activestate.com/activetcl/downloads
.. _ghostscript downloads page: https://www.ghostscript.com/download/gsdnld.html
.. _as shown here: https://java.com/en/download/help/path.xml
For Ghostscript Checks to see if dependencies were installed correctly
^^^^^^^^^^^^^^^ ------------------------------------------------------
Open the Python REPL and run the following: You can do the following checks to see if the dependencies were installed correctly.
For Ubuntu/MacOS::
>>> from ctypes.util import find_library
>>> find_library("gs")
"libgs.so.9"
For Windows::
>>> import ctypes
>>> from ctypes.util import find_library
>>> find_library("".join(("gsdll", str(ctypes.sizeof(ctypes.c_voidp) * 8), ".dll")))
<name-of-ghostscript-library-on-windows>
**Check:** The output of the ``find_library`` function should not be empty.
If the output is empty, then it's possible that the Ghostscript library is not available on one of the ``LD_LIBRARY_PATH``/``DYLD_LIBRARY_PATH``/``PATH`` variables, depending on your operating system. In this case, you may have to modify one of those path variables.
For Tkinter For Tkinter
^^^^^^^^^^^ ^^^^^^^^^^^
Launch Python and then import Tkinter:: Launch Python, and then at the prompt, type::
>>> import Tkinter
Or in Python 3::
>>> import tkinter >>> import tkinter
**Check:** Importing ``tkinter`` should not raise an import error. If you have Tkinter, Python will not print an error message, and if not, you will see an ``ImportError``.
For ghostscript
^^^^^^^^^^^^^^^
Run the following to check the ghostscript version.
For Ubuntu/macOS::
$ gs -version
For Windows::
C:\> gswin64c.exe -version
Or for Windows 32-bit::
C:\> gswin32c.exe -version
If you have ghostscript, you should see the ghostscript version and copyright information.
View File
@ -5,36 +5,43 @@ Installation of Camelot
This part of the documentation covers the steps to install Camelot. This part of the documentation covers the steps to install Camelot.
After :ref:`installing the dependencies <install_deps>`, which include `Ghostscript <https://www.ghostscript.com>`_ and `Tkinter <https://wiki.python.org/moin/TkInter>`_, you can use one of the following methods to install Camelot: Using conda
-----------
.. warning:: The ``lattice`` flavor will fail to run if Ghostscript is not installed. You may run into errors as shown in `issue #193 <https://github.com/camelot-dev/camelot/issues/193>`_. The easiest way to install Camelot is to install it with `conda`_, which is a package manager and environment management system for the `Anaconda`_ distribution.
::
pip
---
To install Camelot from PyPI using ``pip``, please include the extra ``cv`` requirement as shown::
$ pip install "camelot-py[base]"
conda
-----
`conda`_ is a package manager and environment management system for the `Anaconda <https://anaconda.org>`_ distribution. It can be used to install Camelot from the ``conda-forge`` channel::
$ conda install -c conda-forge camelot-py $ conda install -c conda-forge camelot-py
.. note:: Camelot is available for Python 2.7, 3.5 and 3.6 on Linux, macOS and Windows. For Windows, you will need to install ghostscript which you can get from their `downloads page`_.
.. _conda: https://conda.io/docs/
.. _Anaconda: http://docs.continuum.io/anaconda/
.. _downloads page: https://www.ghostscript.com/download/gsdnld.html
.. _conda-forge: https://conda-forge.org/
Using pip
---------
After :ref:`installing the dependencies <install_deps>`, which include `Tkinter`_ and `ghostscript`_, you can simply use pip to install Camelot::
$ pip install camelot-py[cv]
.. _Tkinter: https://wiki.python.org/moin/TkInter
.. _ghostscript: https://www.ghostscript.com
From the source code From the source code
-------------------- --------------------
After :ref:`installing the dependencies <install_deps>`, you can install Camelot from source by: After :ref:`installing the dependencies <install_deps>`, you can install from the source by:
1. Cloning the GitHub repository. 1. Cloning the GitHub repository.
:: ::
$ git clone https://www.github.com/camelot-dev/camelot $ git clone https://www.github.com/socialcopsdev/camelot
2. And then simply using pip again. 2. Then simply using pip again.
:: ::
$ cd camelot $ cd camelot
$ pip install ".[base]" $ pip install ".[cv]"
View File
@ -27,7 +27,7 @@ Here is a `comparison`_ of Camelot's output with outputs from other open-source
.. _pdf-table-extract: https://github.com/ashima/pdf-table-extract .. _pdf-table-extract: https://github.com/ashima/pdf-table-extract
.. _PDFTables: https://pdftables.com/ .. _PDFTables: https://pdftables.com/
.. _Smallpdf: https://smallpdf.com .. _Smallpdf: https://smallpdf.com
.. _comparison: https://github.com/camelot-dev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools .. _comparison: https://github.com/socialcopsdev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools
What's in a name? What's in a name?
----------------- -----------------
View File
@ -14,7 +14,7 @@ Begin by importing the Camelot module::
>>> import camelot >>> import camelot
Now, let's try to read a PDF. (You can check out the PDF used in this example `here`_.) Since the PDF has a table with clearly demarcated lines, we will use the :ref:`Lattice <lattice>` method here. Now, let's try to read a PDF. (You can check out the PDF used in this example `here`_.) Since the PDF has a table with clearly demarcated lines, we will use the :ref:`Lattice <lattice>` method here. To do that, we will set the ``mesh`` keyword argument to ``True``.
.. note:: :ref:`Lattice <lattice>` is used by default. You can use :ref:`Stream <stream>` with ``flavor='stream'``. .. note:: :ref:`Lattice <lattice>` is used by default. You can use :ref:`Stream <stream>` with ``flavor='stream'``.
@ -56,7 +56,7 @@ Woah! The accuracy is top-notch and there is less whitespace, which means the ta
.. csv-table:: .. csv-table::
:file: ../_static/csv/foo.csv :file: ../_static/csv/foo.csv
Looks good! You can now export the table as a CSV file using its :meth:`to_csv() <camelot.core.Table.to_csv>` method. Alternatively, you can use the :meth:`to_json() <camelot.core.Table.to_json>`, :meth:`to_excel() <camelot.core.Table.to_excel>`, :meth:`to_html() <camelot.core.Table.to_html>`, :meth:`to_markdown() <camelot.core.Table.to_markdown>` or :meth:`to_sqlite() <camelot.core.Table.to_sqlite>` methods to export the table as JSON, Excel, HTML, Markdown files or a SQLite database respectively. Looks good! You can now export the table as a CSV file using its :meth:`to_csv() <camelot.core.Table.to_csv>` method. Alternatively you can use :meth:`to_json() <camelot.core.Table.to_json>`, :meth:`to_excel() <camelot.core.Table.to_excel>` or :meth:`to_html() <camelot.core.Table.to_html>` methods to export the table as JSON, Excel and HTML files respectively.
:: ::
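>>> # Sketch (output filename assumed): write the first table to disk as CSV
>>> tables[0].to_csv('foo.csv')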
@ -76,7 +76,7 @@ You can also export all tables at once, using the :class:`tables <camelot.core.T
$ camelot --format csv --output foo.csv lattice foo.pdf $ camelot --format csv --output foo.csv lattice foo.pdf
This will export all tables as CSV files at the path specified. Alternatively, you can use ``f='json'``, ``f='excel'``, ``f='html'``, ``f='markdown'`` or ``f='sqlite'``. This will export all tables as CSV files at the path specified. Alternatively, you can use ``f='json'``, ``f='excel'`` or ``f='html'``.
.. note:: The :meth:`export() <camelot.core.TableList.export>` method exports files with a ``page-*-table-*`` suffix. In the example above, the single table in the list will be exported to ``foo-page-1-table-1.csv``. If the list contains multiple tables, multiple CSV files will be created. To avoid filling up your path with multiple files, you can use ``compress=True``, which will create a single ZIP file at your path with all the CSV files. .. note:: The :meth:`export() <camelot.core.TableList.export>` method exports files with a ``page-*-table-*`` suffix. In the example above, the single table in the list will be exported to ``foo-page-1-table-1.csv``. If the list contains multiple tables, multiple CSV files will be created. To avoid filling up your path with multiple files, you can use ``compress=True``, which will create a single ZIP file at your path with all the CSV files.
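For instance, a minimal sketch (output path assumed) that writes all tables and bundles them into a single ZIP file::

>>> tables.export('foo.csv', f='csv', compress=True)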
8
requirements.txt 100755
View File
@ -0,0 +1,8 @@
click>=6.7
matplotlib>=2.2.3
numpy>=1.13.3
opencv-python>=3.4.2.17
openpyxl>=2.5.8
pandas>=0.23.4
pdfminer.six>=20170720
PyPDF2>=1.26.0
106
setup.py
View File
@ -6,78 +6,76 @@ from setuptools import find_packages
here = os.path.abspath(os.path.dirname(__file__)) here = os.path.abspath(os.path.dirname(__file__))
about = {} about = {}
with open(os.path.join(here, "camelot", "__version__.py"), "r") as f: with open(os.path.join(here, 'camelot', '__version__.py'), 'r') as f:
exec(f.read(), about) exec(f.read(), about)
with open("README.md", "r") as f: with open('README.md', 'r') as f:
readme = f.read() readme = f.read()
requires = [ requires = [
"chardet>=3.0.4", 'chardet>=3.0.4',
"click>=6.7", 'click>=6.7',
"numpy>=1.13.3", 'numpy>=1.13.3',
"openpyxl>=2.5.8", 'openpyxl>=2.5.8',
"pandas>=0.23.4", 'pandas>=0.23.4',
"pdfminer.six>=20200726", 'pdfminer.six>=20170720',
"PyPDF2>=1.26.0", 'PyPDF2>=1.26.0'
"tabulate>=0.8.9",
] ]
base_requires = ["ghostscript>=0.7", "opencv-python>=3.4.2.17", "pdftopng>=0.2.3"] cv_requires = [
'opencv-python>=3.4.2.17'
]
plot_requires = [ plot_requires = [
"matplotlib>=2.2.3", 'matplotlib>=2.2.3',
] ]
dev_requires = [ dev_requires = [
"codecov>=2.0.15", 'codecov>=2.0.15',
"pytest>=5.4.3", 'pytest>=3.8.0',
"pytest-cov>=2.10.0", 'pytest-cov>=2.6.0',
"pytest-mpl>=0.11", 'pytest-mpl>=0.10',
"pytest-runner>=5.2", 'pytest-runner>=4.2',
"Sphinx>=3.1.2", 'Sphinx>=1.7.9'
"sphinx-autobuild>=2021.3.14",
] ]
all_requires = base_requires + plot_requires all_requires = cv_requires + plot_requires
dev_requires = dev_requires + all_requires dev_requires = dev_requires + all_requires
def setup_package(): def setup_package():
metadata = dict( metadata = dict(name=about['__title__'],
name=about["__title__"], version=about['__version__'],
version=about["__version__"], description=about['__description__'],
description=about["__description__"], long_description=readme,
long_description=readme, long_description_content_type="text/markdown",
long_description_content_type="text/markdown", url=about['__url__'],
url=about["__url__"], author=about['__author__'],
author=about["__author__"], author_email=about['__author_email__'],
author_email=about["__author_email__"], license=about['__license__'],
license=about["__license__"], packages=find_packages(exclude=('tests',)),
packages=find_packages(exclude=("tests",)), install_requires=requires,
install_requires=requires, extras_require={
extras_require={ 'all': all_requires,
"all": all_requires, 'cv': cv_requires,
"base": base_requires, 'dev': dev_requires,
"cv": base_requires, # deprecate 'plot': plot_requires
"dev": dev_requires, },
"plot": plot_requires, entry_points={
}, 'console_scripts': [
entry_points={ 'camelot = camelot.cli:cli',
"console_scripts": [ ],
"camelot = camelot.cli:cli", },
], classifiers=[
}, # Trove classifiers
classifiers=[ # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
# Trove classifiers 'License :: OSI Approved :: MIT License',
# Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers 'Programming Language :: Python :: 2.7',
"License :: OSI Approved :: MIT License", 'Programming Language :: Python :: 3.5',
"Programming Language :: Python :: 3.6", 'Programming Language :: Python :: 3.6',
"Programming Language :: Python :: 3.7", 'Programming Language :: Python :: 3.7'
"Programming Language :: Python :: 3.8", ])
],
)
try: try:
from setuptools import setup from setuptools import setup
@ -87,5 +85,5 @@ def setup_package():
setup(**metadata) setup(**metadata)
if __name__ == "__main__": if __name__ == '__main__':
setup_package() setup_package()
View File
@ -1,3 +1,2 @@
import matplotlib import matplotlib
matplotlib.use('agg')
matplotlib.use("agg")
View File
@ -1,9 +1,7 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
import os import os
import sys
import pytest
from click.testing import CliRunner from click.testing import CliRunner
from camelot.cli import cli from camelot.cli import cli
@ -11,181 +9,109 @@ from camelot.utils import TemporaryDirectory
testdir = os.path.dirname(os.path.abspath(__file__)) testdir = os.path.dirname(os.path.abspath(__file__))
testdir = os.path.join(testdir, "files") testdir = os.path.join(testdir, 'files')
skip_on_windows = pytest.mark.skipif(
sys.platform.startswith("win"),
reason="Ghostscript not installed in Windows test environment",
)
def test_help_output():
runner = CliRunner()
prog_name = runner.get_default_prog_name(cli)
result = runner.invoke(cli, ["--help"])
output = result.output
assert prog_name == "camelot"
assert result.output.startswith("Usage: %(prog_name)s [OPTIONS] COMMAND" % locals())
assert all(
v in result.output
for v in ["Options:", "--version", "--help", "Commands:", "lattice", "stream"]
)
@skip_on_windows
def test_cli_lattice(): def test_cli_lattice():
with TemporaryDirectory() as tempdir: with TemporaryDirectory() as tempdir:
infile = os.path.join(testdir, "foo.pdf") infile = os.path.join(testdir, 'foo.pdf')
outfile = os.path.join(tempdir, "foo.csv") outfile = os.path.join(tempdir, 'foo.csv')
runner = CliRunner() runner = CliRunner()
result = runner.invoke( result = runner.invoke(cli, ['--format', 'csv', '--output', outfile,
cli, ["--format", "csv", "--output", outfile, "lattice", infile] 'lattice', infile])
)
assert result.exit_code == 0 assert result.exit_code == 0
assert "Found 1 tables" in result.output assert result.output == 'Found 1 tables\n'
result = runner.invoke(cli, ["--format", "csv", "lattice", infile]) result = runner.invoke(cli, ['--format', 'csv',
output_error = "Error: Please specify output file path using --output" 'lattice', infile])
output_error = 'Error: Please specify output file path using --output'
assert output_error in result.output assert output_error in result.output
result = runner.invoke(cli, ["--output", outfile, "lattice", infile]) result = runner.invoke(cli, ['--output', outfile,
format_error = "Please specify output file format using --format" 'lattice', infile])
format_error = 'Please specify output file format using --format'
assert format_error in result.output assert format_error in result.output
def test_cli_stream(): def test_cli_stream():
with TemporaryDirectory() as tempdir: with TemporaryDirectory() as tempdir:
infile = os.path.join(testdir, "budget.pdf") infile = os.path.join(testdir, 'budget.pdf')
outfile = os.path.join(tempdir, "budget.csv") outfile = os.path.join(tempdir, 'budget.csv')
runner = CliRunner() runner = CliRunner()
result = runner.invoke( result = runner.invoke(cli, ['--format', 'csv', '--output', outfile,
cli, ["--format", "csv", "--output", outfile, "stream", infile] 'stream', infile])
)
assert result.exit_code == 0 assert result.exit_code == 0
assert result.output == "Found 1 tables\n" assert result.output == 'Found 1 tables\n'
result = runner.invoke(cli, ["--format", "csv", "stream", infile]) result = runner.invoke(cli, ['--format', 'csv', 'stream', infile])
output_error = "Error: Please specify output file path using --output" output_error = 'Error: Please specify output file path using --output'
assert output_error in result.output assert output_error in result.output
result = runner.invoke(cli, ["--output", outfile, "stream", infile]) result = runner.invoke(cli, ['--output', outfile, 'stream', infile])
format_error = "Please specify output file format using --format" format_error = 'Please specify output file format using --format'
assert format_error in result.output assert format_error in result.output
def test_cli_password(): def test_cli_password():
with TemporaryDirectory() as tempdir: with TemporaryDirectory() as tempdir:
infile = os.path.join(testdir, "health_protected.pdf") infile = os.path.join(testdir, 'health_protected.pdf')
outfile = os.path.join(tempdir, "health_protected.csv") outfile = os.path.join(tempdir, 'health_protected.csv')
runner = CliRunner() runner = CliRunner()
result = runner.invoke( result = runner.invoke(cli, ['--password', 'userpass',
cli, '--format', 'csv', '--output', outfile,
[ 'stream', infile])
"--password",
"userpass",
"--format",
"csv",
"--output",
outfile,
"stream",
infile,
],
)
assert result.exit_code == 0 assert result.exit_code == 0
assert result.output == "Found 1 tables\n" assert result.output == 'Found 1 tables\n'
output_error = "file has not been decrypted" output_error = 'file has not been decrypted'
# no password # no password
result = runner.invoke( result = runner.invoke(cli, ['--format', 'csv', '--output', outfile,
cli, ["--format", "csv", "--output", outfile, "stream", infile] 'stream', infile])
)
assert output_error in str(result.exception) assert output_error in str(result.exception)
# bad password # bad password
result = runner.invoke( result = runner.invoke(cli, ['--password', 'wrongpass',
cli, '--format', 'csv', '--output', outfile,
[ 'stream', infile])
"--password",
"wrongpass",
"--format",
"csv",
"--output",
outfile,
"stream",
infile,
],
)
assert output_error in str(result.exception) assert output_error in str(result.exception)
def test_cli_output_format(): def test_cli_output_format():
with TemporaryDirectory() as tempdir: with TemporaryDirectory() as tempdir:
infile = os.path.join(testdir, "health.pdf") infile = os.path.join(testdir, 'health.pdf')
outfile = os.path.join(tempdir, 'health.{}')
runner = CliRunner() runner = CliRunner()
# json # json
outfile = os.path.join(tempdir, "health.json") result = runner.invoke(cli, ['--format', 'json', '--output', outfile.format('json'),
result = runner.invoke( 'stream', infile])
cli, assert result.exit_code == 0
["--format", "json", "--output", outfile, "stream", infile],
)
assert result.exit_code == 0, f"Output: {result.output}"
# excel # excel
outfile = os.path.join(tempdir, "health.xlsx") result = runner.invoke(cli, ['--format', 'excel', '--output', outfile.format('xlsx'),
result = runner.invoke( 'stream', infile])
cli, assert result.exit_code == 0
["--format", "excel", "--output", outfile, "stream", infile],
)
assert result.exit_code == 0, f"Output: {result.output}"
# html # html
outfile = os.path.join(tempdir, "health.html") result = runner.invoke(cli, ['--format', 'html', '--output', outfile.format('html'),
result = runner.invoke( 'stream', infile])
cli, assert result.exit_code == 0
["--format", "html", "--output", outfile, "stream", infile],
)
assert result.exit_code == 0, f"Output: {result.output}"
# markdown
outfile = os.path.join(tempdir, "health.md")
result = runner.invoke(
cli,
["--format", "markdown", "--output", outfile, "stream", infile],
)
assert result.exit_code == 0, f"Output: {result.output}"
# zip # zip
outfile = os.path.join(tempdir, "health.csv") result = runner.invoke(cli, ['--zip', '--format', 'csv', '--output', outfile.format('csv'),
result = runner.invoke( 'stream', infile])
cli, assert result.exit_code == 0
[
"--zip",
"--format",
"csv",
"--output",
outfile,
"stream",
infile,
],
)
assert result.exit_code == 0, f"Output: {result.output}"
def test_cli_quiet(): def test_cli_quiet():
with TemporaryDirectory() as tempdir: with TemporaryDirectory() as tempdir:
infile = os.path.join(testdir, "empty.pdf") infile = os.path.join(testdir, 'blank.pdf')
outfile = os.path.join(tempdir, "empty.csv") outfile = os.path.join(tempdir, 'blank.csv')
runner = CliRunner() runner = CliRunner()
result = runner.invoke( result = runner.invoke(cli, ['--format', 'csv', '--output', outfile,
cli, ["--format", "csv", "--output", outfile, "stream", infile] 'stream', infile])
) assert 'No tables found on page-1' in result.output
assert "No tables found on page-1" in result.output
result = runner.invoke( result = runner.invoke(cli, ['--quiet', '--format', 'csv',
cli, ["--quiet", "--format", "csv", "--output", outfile, "stream", infile] '--output', outfile, 'stream', infile])
) assert 'No tables found on page-1' not in result.output
assert "No tables found on page-1" not in result.output
View File
@ -1,47 +1,24 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
import os import os
import sys
import pytest
import pandas as pd import pandas as pd
from pandas.testing import assert_frame_equal
import camelot import camelot
from camelot.io import PDFHandler
from camelot.core import Table, TableList
from camelot.__version__ import generate_version
from camelot.backends import ImageConversionBackend
from .data import * from .data import *
testdir = os.path.dirname(os.path.abspath(__file__)) testdir = os.path.dirname(os.path.abspath(__file__))
testdir = os.path.join(testdir, "files") testdir = os.path.join(testdir, "files")
skip_on_windows = pytest.mark.skipif(
sys.platform.startswith("win"),
reason="Ghostscript not installed in Windows test environment",
)
def test_version_generation():
version = (0, 7, 3)
assert generate_version(version, prerelease=None, revision=None) == "0.7.3"
def test_version_generation_with_prerelease_revision():
version = (0, 7, 3)
prerelease = "alpha"
revision = 2
assert (
generate_version(version, prerelease=prerelease, revision=revision)
== "0.7.3-alpha.2"
)
@skip_on_windows
def test_parsing_report(): def test_parsing_report():
parsing_report = {"accuracy": 99.02, "whitespace": 12.24, "order": 1, "page": 1} parsing_report = {
'accuracy': 99.02,
'whitespace': 12.24,
'order': 1,
'page': 1
}
filename = os.path.join(testdir, "foo.pdf") filename = os.path.join(testdir, "foo.pdf")
tables = camelot.read_pdf(filename) tables = camelot.read_pdf(filename)
@ -53,122 +30,194 @@ def test_password():
filename = os.path.join(testdir, "health_protected.pdf") filename = os.path.join(testdir, "health_protected.pdf")
tables = camelot.read_pdf(filename, password="ownerpass", flavor="stream") tables = camelot.read_pdf(filename, password="ownerpass", flavor="stream")
assert_frame_equal(df, tables[0].df) assert df.equals(tables[0].df)
tables = camelot.read_pdf(filename, password="userpass", flavor="stream") tables = camelot.read_pdf(filename, password="userpass", flavor="stream")
assert_frame_equal(df, tables[0].df) assert df.equals(tables[0].df)
def test_repr_poppler(): def test_stream():
df = pd.DataFrame(data_stream)
filename = os.path.join(testdir, "health.pdf")
tables = camelot.read_pdf(filename, flavor="stream")
assert df.equals(tables[0].df)
def test_stream_table_rotated():
df = pd.DataFrame(data_stream_table_rotated)
filename = os.path.join(testdir, "clockwise_table_2.pdf")
tables = camelot.read_pdf(filename, flavor="stream")
assert df.equals(tables[0].df)
filename = os.path.join(testdir, "anticlockwise_table_2.pdf")
tables = camelot.read_pdf(filename, flavor="stream")
assert df.equals(tables[0].df)
def test_stream_two_tables():
df1 = pd.DataFrame(data_stream_two_tables_1)
df2 = pd.DataFrame(data_stream_two_tables_2)
filename = os.path.join(testdir, "tabula/12s0324.pdf")
tables = camelot.read_pdf(filename, flavor='stream')
assert len(tables) == 2
assert df1.equals(tables[0].df)
assert df2.equals(tables[1].df)
def test_stream_table_areas():
df = pd.DataFrame(data_stream_table_areas)
filename = os.path.join(testdir, "tabula/us-007.pdf")
tables = camelot.read_pdf(filename, flavor="stream", table_areas=["320,500,573,335"])
assert df.equals(tables[0].df)
def test_stream_columns():
df = pd.DataFrame(data_stream_columns)
filename = os.path.join(testdir, "mexican_towns.pdf")
tables = camelot.read_pdf(
filename, flavor="stream", columns=["67,180,230,425,475"], row_tol=10)
assert df.equals(tables[0].df)
def test_stream_split_text():
df = pd.DataFrame(data_stream_split_text)
filename = os.path.join(testdir, "tabula/m27.pdf")
tables = camelot.read_pdf(
filename, flavor="stream", columns=["72,95,209,327,442,529,566,606,683"], split_text=True)
assert df.equals(tables[0].df)
def test_stream_flag_size():
df = pd.DataFrame(data_stream_flag_size)
filename = os.path.join(testdir, "superscript.pdf")
tables = camelot.read_pdf(filename, flavor="stream", flag_size=True)
assert df.equals(tables[0].df)
def test_stream_strip_text():
df = pd.DataFrame(data_stream_strip_text)
filename = os.path.join(testdir, "detect_vertical_false.pdf")
tables = camelot.read_pdf(filename, flavor="stream", strip_text="\n")
assert df.equals(tables[0].df)
def test_stream_edge_tol():
df = pd.DataFrame(data_stream_edge_tol)
filename = os.path.join(testdir, "edge_tol.pdf")
tables = camelot.read_pdf(filename, flavor="stream", edge_tol=500)
assert df.equals(tables[0].df)
def test_stream_layout_kwargs():
df = pd.DataFrame(data_stream_layout_kwargs)
filename = os.path.join(testdir, "detect_vertical_false.pdf")
tables = camelot.read_pdf(
filename, flavor="stream", layout_kwargs={"detect_vertical": False})
assert df.equals(tables[0].df)
def test_lattice():
df = pd.DataFrame(data_lattice)
filename = os.path.join(
testdir, "tabula/icdar2013-dataset/competition-dataset-us/us-030.pdf")
tables = camelot.read_pdf(filename, pages="2")
assert df.equals(tables[0].df)
def test_lattice_table_rotated():
df = pd.DataFrame(data_lattice_table_rotated)
filename = os.path.join(testdir, "clockwise_table_1.pdf")
tables = camelot.read_pdf(filename)
assert df.equals(tables[0].df)
filename = os.path.join(testdir, "anticlockwise_table_1.pdf")
tables = camelot.read_pdf(filename)
assert df.equals(tables[0].df)
def test_lattice_two_tables():
df1 = pd.DataFrame(data_lattice_two_tables_1)
df2 = pd.DataFrame(data_lattice_two_tables_2)
filename = os.path.join(testdir, "twotables_2.pdf")
tables = camelot.read_pdf(filename)
assert len(tables) == 2
assert df1.equals(tables[0].df)
assert df2.equals(tables[1].df)
def test_lattice_table_areas():
df = pd.DataFrame(data_lattice_table_areas)
filename = os.path.join(testdir, "twotables_2.pdf")
tables = camelot.read_pdf(filename, table_areas=["80,693,535,448"])
assert df.equals(tables[0].df)
def test_lattice_process_background():
df = pd.DataFrame(data_lattice_process_background)
filename = os.path.join(testdir, "background_lines_1.pdf")
tables = camelot.read_pdf(filename, process_background=True)
assert df.equals(tables[1].df)
def test_lattice_copy_text():
df = pd.DataFrame(data_lattice_copy_text)
filename = os.path.join(testdir, "row_span_1.pdf")
tables = camelot.read_pdf(filename, line_size_scaling=60, copy_text="v")
assert df.equals(tables[0].df)
def test_lattice_shift_text():
df_lt = pd.DataFrame(data_lattice_shift_text_left_top)
df_disable = pd.DataFrame(data_lattice_shift_text_disable)
df_rb = pd.DataFrame(data_lattice_shift_text_right_bottom)
filename = os.path.join(testdir, "column_span_2.pdf")
tables = camelot.read_pdf(filename, line_size_scaling=40)
assert df_lt.equals(tables[0].df)
tables = camelot.read_pdf(filename, line_size_scaling=40, shift_text=[''])
assert df_disable.equals(tables[0].df)
tables = camelot.read_pdf(filename, line_size_scaling=40, shift_text=['r', 'b'])
assert df_rb.equals(tables[0].df)
def test_repr():
filename = os.path.join(testdir, "foo.pdf") filename = os.path.join(testdir, "foo.pdf")
tables = camelot.read_pdf(filename, backend="poppler") tables = camelot.read_pdf(filename)
assert repr(tables) == "<TableList n=1>" assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>" assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=219 x2=165 y2=234>" assert repr(tables[0].cells[0][0]) == "<Cell x1=120.48 y1=218.43 x2=164.64 y2=233.77>"
@skip_on_windows def test_url():
def test_repr_ghostscript():
filename = os.path.join(testdir, "foo.pdf")
tables = camelot.read_pdf(filename, backend="ghostscript")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=218 x2=165 y2=234>"
def test_url_poppler():
url = "https://camelot-py.readthedocs.io/en/master/_static/pdf/foo.pdf" url = "https://camelot-py.readthedocs.io/en/master/_static/pdf/foo.pdf"
tables = camelot.read_pdf(url, backend="poppler") tables = camelot.read_pdf(url)
assert repr(tables) == "<TableList n=1>" assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>" assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=219 x2=165 y2=234>" assert repr(tables[0].cells[0][0]) == "<Cell x1=120.48 y1=218.43 x2=164.64 y2=233.77>"
@skip_on_windows def test_arabic():
def test_url_ghostscript(): df = pd.DataFrame(data_arabic)
url = "https://camelot-py.readthedocs.io/en/master/_static/pdf/foo.pdf"
tables = camelot.read_pdf(url, backend="ghostscript")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=218 x2=165 y2=234>"
filename = os.path.join(testdir, "tabula/arabic.pdf")
def test_pages_poppler(): tables = camelot.read_pdf(filename)
url = "https://camelot-py.readthedocs.io/en/master/_static/pdf/foo.pdf" assert df.equals(tables[0].df)
tables = camelot.read_pdf(url, backend="poppler")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=219 x2=165 y2=234>"
tables = camelot.read_pdf(url, pages="1-end", backend="poppler")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=219 x2=165 y2=234>"
tables = camelot.read_pdf(url, pages="all", backend="poppler")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=219 x2=165 y2=234>"
@skip_on_windows
def test_pages_ghostscript():
url = "https://camelot-py.readthedocs.io/en/master/_static/pdf/foo.pdf"
tables = camelot.read_pdf(url, backend="ghostscript")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=218 x2=165 y2=234>"
tables = camelot.read_pdf(url, pages="1-end", backend="ghostscript")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=218 x2=165 y2=234>"
tables = camelot.read_pdf(url, pages="all", backend="ghostscript")
assert repr(tables) == "<TableList n=1>"
assert repr(tables[0]) == "<Table shape=(7, 7)>"
assert repr(tables[0].cells[0][0]) == "<Cell x1=120 y1=218 x2=165 y2=234>"
def test_table_order():
def _make_table(page, order):
t = Table([], [])
t.page = page
t.order = order
return t
table_list = TableList(
[_make_table(2, 1), _make_table(1, 1), _make_table(3, 4), _make_table(1, 2)]
)
assert [(t.page, t.order) for t in sorted(table_list)] == [
(1, 1),
(1, 2),
(2, 1),
(3, 4),
]
assert [(t.page, t.order) for t in sorted(table_list, reverse=True)] == [
(3, 4),
(2, 1),
(1, 2),
(1, 1),
]
def test_handler_pages_generator():
filename = os.path.join(testdir, "foo.pdf")
handler = PDFHandler(filename)
assert handler._get_pages("1") == [1]
handler = PDFHandler(filename)
assert handler._get_pages("all") == [1]
handler = PDFHandler(filename)
assert handler._get_pages("1-end") == [1]
handler = PDFHandler(filename)
assert handler._get_pages("1,2,3,4") == [1, 2, 3, 4]
handler = PDFHandler(filename)
assert handler._get_pages("1,2,5-10") == [1, 2, 5, 6, 7, 8, 9, 10]
View File
@ -1,7 +1,6 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
import os import os
import sys
import warnings import warnings
import pytest import pytest
@ -11,145 +10,94 @@ import camelot
testdir = os.path.dirname(os.path.abspath(__file__)) testdir = os.path.dirname(os.path.abspath(__file__))
testdir = os.path.join(testdir, "files") testdir = os.path.join(testdir, "files")
filename = os.path.join(testdir, "foo.pdf") filename = os.path.join(testdir, 'foo.pdf')
skip_on_windows = pytest.mark.skipif(
sys.platform.startswith("win"),
reason="Ghostscript not installed in Windows test environment",
)
def test_unknown_flavor(): def test_unknown_flavor():
message = "Unknown flavor specified." " Use either 'lattice' or 'stream'" message = ("Unknown flavor specified."
with pytest.raises(NotImplementedError, match=message): " Use either 'lattice' or 'stream'")
tables = camelot.read_pdf(filename, flavor="chocolate") with pytest.raises(NotImplementedError, message=message):
tables = camelot.read_pdf(filename, flavor='chocolate')
def test_input_kwargs(): def test_input_kwargs():
message = "columns cannot be used with flavor='lattice'" message = "columns cannot be used with flavor='lattice'"
with pytest.raises(ValueError, match=message): with pytest.raises(ValueError, message=message):
tables = camelot.read_pdf(filename, columns=["10,20,30,40"]) tables = camelot.read_pdf(filename, columns=['10,20,30,40'])
def test_unsupported_format(): def test_unsupported_format():
message = "File format not supported" message = 'File format not supported'
filename = os.path.join(testdir, "foo.csv") filename = os.path.join(testdir, 'foo.csv')
with pytest.raises(NotImplementedError, match=message): with pytest.raises(NotImplementedError, message=message):
tables = camelot.read_pdf(filename) tables = camelot.read_pdf(filename)
@skip_on_windows def test_stream_equal_length():
message = ("Length of table_areas and columns"
" should be equal")
with pytest.raises(ValueError, message=message):
tables = camelot.read_pdf(filename, flavor='stream',
table_areas=['10,20,30,40'], columns=['10,20,30,40', '10,20,30,40'])
def test_no_tables_found():
filename = os.path.join(testdir, 'blank.pdf')
with warnings.catch_warnings():
warnings.simplefilter('error')
with pytest.raises(UserWarning) as e:
tables = camelot.read_pdf(filename)
assert str(e.value) == 'No tables found on page-1'
def test_no_tables_found_logs_suppressed(): def test_no_tables_found_logs_suppressed():
filename = os.path.join(testdir, "foo.pdf") filename = os.path.join(testdir, 'foo.pdf')
with warnings.catch_warnings(): with warnings.catch_warnings():
# the test should fail if any warning is thrown # the test should fail if any warning is thrown
warnings.simplefilter("error") warnings.simplefilter('error')
try: try:
tables = camelot.read_pdf(filename, suppress_stdout=True) tables = camelot.read_pdf(filename, suppress_stdout=True)
except Warning as e: except Warning as e:
warning_text = str(e) warning_text = str(e)
pytest.fail(f"Unexpected warning: {warning_text}") pytest.fail('Unexpected warning: {}'.format(warning_text))
def test_no_tables_found_warnings_suppressed(): def test_no_tables_found_warnings_suppressed():
filename = os.path.join(testdir, "empty.pdf") filename = os.path.join(testdir, 'blank.pdf')
with warnings.catch_warnings(): with warnings.catch_warnings():
# the test should fail if any warning is thrown # the test should fail if any warning is thrown
warnings.simplefilter("error") warnings.simplefilter('error')
try: try:
tables = camelot.read_pdf(filename, suppress_stdout=True) tables = camelot.read_pdf(filename, suppress_stdout=True)
except Warning as e: except Warning as e:
warning_text = str(e) warning_text = str(e)
pytest.fail(f"Unexpected warning: {warning_text}") pytest.fail('Unexpected warning: {}'.format(warning_text))
def test_ghostscript_not_found(monkeypatch):
import distutils
def _find_executable_patch(arg):
return ''
monkeypatch.setattr(distutils.spawn, 'find_executable', _find_executable_patch)
message = ('Please make sure that Ghostscript is installed and available'
' on the PATH environment variable')
filename = os.path.join(testdir, 'foo.pdf')
with pytest.raises(Exception, message=message):
tables = camelot.read_pdf(filename)
def test_no_password():
    filename = os.path.join(testdir, "health_protected.pdf")
    message = "file has not been decrypted"
    with pytest.raises(Exception, match=message):
        tables = camelot.read_pdf(filename)

def test_bad_password():
    filename = os.path.join(testdir, "health_protected.pdf")
    message = "file has not been decrypted"
    with pytest.raises(Exception, match=message):
        tables = camelot.read_pdf(filename, password="wrongpass")
def test_stream_equal_length():
    message = "Length of table_areas and columns" " should be equal"
    with pytest.raises(ValueError, match=message):
        tables = camelot.read_pdf(
            filename,
            flavor="stream",
            table_areas=["10,20,30,40"],
            columns=["10,20,30,40", "10,20,30,40"],
        )

def test_image_warning():
    filename = os.path.join(testdir, "image.pdf")
    with warnings.catch_warnings():
        warnings.simplefilter("error", category=UserWarning)
        with pytest.raises(UserWarning) as e:
            tables = camelot.read_pdf(filename)
        assert (
            str(e.value)
            == "page-1 is image-based, camelot only works on text-based pages."
        )

def test_stream_no_tables_on_page():
    filename = os.path.join(testdir, "empty.pdf")
    with warnings.catch_warnings():
        warnings.simplefilter("error")
        with pytest.raises(UserWarning) as e:
            tables = camelot.read_pdf(filename, flavor="stream")
        assert str(e.value) == "No tables found on page-1"

def test_stream_no_tables_in_area():
    filename = os.path.join(testdir, "only_page_number.pdf")
    with warnings.catch_warnings():
        warnings.simplefilter("error")
        with pytest.raises(UserWarning) as e:
            tables = camelot.read_pdf(filename, flavor="stream")
        assert str(e.value) == "No tables found in table area 1"

def test_lattice_no_tables_on_page():
    filename = os.path.join(testdir, "empty.pdf")
    with warnings.catch_warnings():
        warnings.simplefilter("error", category=UserWarning)
        with pytest.raises(UserWarning) as e:
            tables = camelot.read_pdf(filename, flavor="lattice")
        assert str(e.value) == "No tables found on page-1"

def test_lattice_unknown_backend():
    message = "Unknown backend 'mupdf' specified. Please use either 'poppler' or 'ghostscript'."
    with pytest.raises(NotImplementedError, match=message):
        tables = camelot.read_pdf(filename, backend="mupdf")

def test_lattice_no_convert_method():
    class ConversionBackend(object):
        pass

    message = "must implement a 'convert' method"
    with pytest.raises(NotImplementedError, match=message):
        tables = camelot.read_pdf(filename, backend=ConversionBackend())

def test_lattice_ghostscript_deprecation_warning():
    ghostscript_deprecation_warning = (
        "'ghostscript' will be replaced by 'poppler' as the default image conversion"
        " backend in v0.12.0. You can try out 'poppler' with backend='poppler'."
    )
    with warnings.catch_warnings():
        warnings.simplefilter("error")
        with pytest.raises(DeprecationWarning) as e:
            tables = camelot.read_pdf(filename)
        assert str(e.value) == ghostscript_deprecation_warning
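
The "must implement a 'convert' method" check above implies that read_pdf's backend argument accepts, besides the 'poppler' and 'ghostscript' names, any object exposing a convert(pdf_path, png_path) method. A minimal sketch of such a custom backend, assuming the pdftocairo binary from poppler-utils is on the PATH (the class name and command are illustrative, not camelot's own backend):

import subprocess

class PdftocairoBackend(object):
    # Any object with a convert(pdf_path, png_path) method is accepted as a backend.
    def convert(self, pdf_path, png_path):
        # pdftocairo -singlefile writes <prefix>.png, so strip the extension first.
        subprocess.run(
            ["pdftocairo", "-png", "-singlefile", pdf_path, png_path[:-4]],
            check=True,
        )

# tables = camelot.read_pdf("foo.pdf", backend=PdftocairoBackend())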

View File

@ -1,60 +0,0 @@
# -*- coding: utf-8 -*-

import pytest

import camelot.backends.image_conversion
from camelot.backends import ImageConversionBackend

class PopplerBackendError(object):
    def convert(self, pdf_path, png_path):
        raise ValueError("Image conversion failed")

class GhostscriptBackendError(object):
    def convert(self, pdf_path, png_path):
        raise ValueError("Image conversion failed")

class GhostscriptBackendNoError(object):
    def convert(self, pdf_path, png_path):
        pass

def test_poppler_backend_error_when_no_use_fallback(monkeypatch):
    BACKENDS = {
        "poppler": PopplerBackendError,
        "ghostscript": GhostscriptBackendNoError,
    }
    monkeypatch.setattr(
        "camelot.backends.image_conversion.BACKENDS", BACKENDS, raising=True
    )
    backend = ImageConversionBackend(use_fallback=False)
    message = "Image conversion failed with image conversion backend 'poppler'"
    with pytest.raises(ValueError, match=message):
        backend.convert("foo", "bar")

def test_ghostscript_backend_when_use_fallback(monkeypatch):
    BACKENDS = {
        "poppler": PopplerBackendError,
        "ghostscript": GhostscriptBackendNoError,
    }
    monkeypatch.setattr(
        "camelot.backends.image_conversion.BACKENDS", BACKENDS, raising=True
    )
    backend = ImageConversionBackend()
    backend.convert("foo", "bar")

def test_ghostscript_backend_error_when_use_fallback(monkeypatch):
    BACKENDS = {"poppler": PopplerBackendError, "ghostscript": GhostscriptBackendError}
    monkeypatch.setattr(
        "camelot.backends.image_conversion.BACKENDS", BACKENDS, raising=True
    )
    backend = ImageConversionBackend()
    message = "Image conversion failed with image conversion backend 'ghostscript'"
    with pytest.raises(ValueError, match=message):
        backend.convert("foo", "bar")
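
Outside the test suite, the fallback switch can be exercised the same way; a small sketch using only the names shown above, with placeholder file paths:

from camelot.backends import ImageConversionBackend

backend = ImageConversionBackend(use_fallback=False)
try:
    backend.convert("table.pdf", "table.png")  # placeholder paths
except ValueError as err:
    # e.g. "Image conversion failed with image conversion backend 'poppler'"
    print(err)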

View File

@ -1,120 +0,0 @@
# -*- coding: utf-8 -*-

import os
import sys

import pytest
import pandas as pd
from pandas.testing import assert_frame_equal

import camelot
from camelot.core import Table, TableList
from camelot.__version__ import generate_version

from .data import *

testdir = os.path.dirname(os.path.abspath(__file__))
testdir = os.path.join(testdir, "files")

skip_on_windows = pytest.mark.skipif(
    sys.platform.startswith("win"),
    reason="Ghostscript not installed in Windows test environment",
)

@skip_on_windows
def test_lattice():
    df = pd.DataFrame(data_lattice)
    filename = os.path.join(
        testdir, "tabula/icdar2013-dataset/competition-dataset-us/us-030.pdf"
    )
    tables = camelot.read_pdf(filename, pages="2")
    assert_frame_equal(df, tables[0].df)

@skip_on_windows
def test_lattice_table_rotated():
    df = pd.DataFrame(data_lattice_table_rotated)
    filename = os.path.join(testdir, "clockwise_table_1.pdf")
    tables = camelot.read_pdf(filename)
    assert_frame_equal(df, tables[0].df)
    filename = os.path.join(testdir, "anticlockwise_table_1.pdf")
    tables = camelot.read_pdf(filename)
    assert_frame_equal(df, tables[0].df)

@skip_on_windows
def test_lattice_two_tables():
    df1 = pd.DataFrame(data_lattice_two_tables_1)
    df2 = pd.DataFrame(data_lattice_two_tables_2)
    filename = os.path.join(testdir, "twotables_2.pdf")
    tables = camelot.read_pdf(filename)
    assert len(tables) == 2
    assert df1.equals(tables[0].df)
    assert df2.equals(tables[1].df)

@skip_on_windows
def test_lattice_table_regions():
    df = pd.DataFrame(data_lattice_table_regions)
    filename = os.path.join(testdir, "table_region.pdf")
    tables = camelot.read_pdf(filename, table_regions=["170,370,560,270"])
    assert_frame_equal(df, tables[0].df)

@skip_on_windows
def test_lattice_table_areas():
    df = pd.DataFrame(data_lattice_table_areas)
    filename = os.path.join(testdir, "twotables_2.pdf")
    tables = camelot.read_pdf(filename, table_areas=["80,693,535,448"])
    assert_frame_equal(df, tables[0].df)

@skip_on_windows
def test_lattice_process_background():
    df = pd.DataFrame(data_lattice_process_background)
    filename = os.path.join(testdir, "background_lines_1.pdf")
    tables = camelot.read_pdf(filename, process_background=True)
    assert_frame_equal(df, tables[1].df)

@skip_on_windows
def test_lattice_copy_text():
    df = pd.DataFrame(data_lattice_copy_text)
    filename = os.path.join(testdir, "row_span_1.pdf")
    tables = camelot.read_pdf(filename, line_scale=60, copy_text="v")
    assert_frame_equal(df, tables[0].df)

@skip_on_windows
def test_lattice_shift_text():
    df_lt = pd.DataFrame(data_lattice_shift_text_left_top)
    df_disable = pd.DataFrame(data_lattice_shift_text_disable)
    df_rb = pd.DataFrame(data_lattice_shift_text_right_bottom)
    filename = os.path.join(testdir, "column_span_2.pdf")
    tables = camelot.read_pdf(filename, line_scale=40)
    assert df_lt.equals(tables[0].df)
    tables = camelot.read_pdf(filename, line_scale=40, shift_text=[""])
    assert df_disable.equals(tables[0].df)
    tables = camelot.read_pdf(filename, line_scale=40, shift_text=["r", "b"])
    assert df_rb.equals(tables[0].df)

@skip_on_windows
def test_lattice_arabic():
    df = pd.DataFrame(data_arabic)
    filename = os.path.join(testdir, "tabula/arabic.pdf")
    tables = camelot.read_pdf(filename)
    assert_frame_equal(df, tables[0].df)
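
Every case above follows one pattern: build the expected frame from a fixture imported via .data, parse the PDF, and compare with assert_frame_equal. A sketch of a new case in that pattern (data_lattice_new_case and new_case.pdf are hypothetical placeholders):

@skip_on_windows
def test_lattice_new_case():
    df = pd.DataFrame(data_lattice_new_case)  # hypothetical fixture
    filename = os.path.join(testdir, "new_case.pdf")  # hypothetical file
    tables = camelot.read_pdf(filename, line_scale=40)
    assert_frame_equal(df, tables[0].df)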

View File

@ -1,98 +1,67 @@
# -*- coding: utf-8 -*-

import os
import sys

import pytest

import camelot

testdir = os.path.dirname(os.path.abspath(__file__))
testdir = os.path.join(testdir, "files")

skip_on_windows = pytest.mark.skipif(
    sys.platform.startswith("win"),
    reason="Ghostscript not installed in Windows test environment",
)

@skip_on_windows
@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_text_plot():
    filename = os.path.join(testdir, "foo.pdf")
    tables = camelot.read_pdf(filename)
    return camelot.plot(tables[0], kind="text")

@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_textedge_plot():
    filename = os.path.join(testdir, "tabula/12s0324.pdf")
    tables = camelot.read_pdf(filename, flavor="stream")
    return camelot.plot(tables[0], kind="textedge")

@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_lattice_contour_plot_poppler():
    filename = os.path.join(testdir, "foo.pdf")
    tables = camelot.read_pdf(filename, backend="poppler")
    return camelot.plot(tables[0], kind="contour")

@skip_on_windows
@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_lattice_contour_plot_ghostscript():
    filename = os.path.join(testdir, "foo.pdf")
    tables = camelot.read_pdf(filename, backend="ghostscript")
    return camelot.plot(tables[0], kind="contour")

@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_stream_contour_plot():
    filename = os.path.join(testdir, "tabula/12s0324.pdf")
    tables = camelot.read_pdf(filename, flavor="stream")
    return camelot.plot(tables[0], kind="contour")

@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_line_plot_poppler():
    filename = os.path.join(testdir, "foo.pdf")
    tables = camelot.read_pdf(filename, backend="poppler")
    return camelot.plot(tables[0], kind="line")

@skip_on_windows
@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_line_plot_ghostscript():
    filename = os.path.join(testdir, "foo.pdf")
    tables = camelot.read_pdf(filename, backend="ghostscript")
    return camelot.plot(tables[0], kind="line")

@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_joint_plot_poppler():
    filename = os.path.join(testdir, "foo.pdf")
    tables = camelot.read_pdf(filename, backend="poppler")
    return camelot.plot(tables[0], kind="joint")

@skip_on_windows
@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_joint_plot_ghostscript():
    filename = os.path.join(testdir, "foo.pdf")
    tables = camelot.read_pdf(filename, backend="ghostscript")
    return camelot.plot(tables[0], kind="joint")

@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_grid_plot_poppler():
    filename = os.path.join(testdir, "foo.pdf")
    tables = camelot.read_pdf(filename, backend="poppler")
    return camelot.plot(tables[0], kind="grid")

@skip_on_windows
@pytest.mark.mpl_image_compare(baseline_dir="files/baseline_plots", remove_text=True)
def test_grid_plot_ghostscript():
    filename = os.path.join(testdir, "foo.pdf")
    tables = camelot.read_pdf(filename, backend="ghostscript")
    return camelot.plot(tables[0], kind="grid")
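
pytest-mpl compares the Matplotlib figure each of these tests returns against the baselines in files/baseline_plots. Outside the harness the same call can be written to disk; a minimal sketch, assuming camelot.plot returns a Matplotlib figure as the image-comparison tests imply (the output filename is illustrative):

import camelot

tables = camelot.read_pdf("foo.pdf")
fig = camelot.plot(tables[0], kind="grid")
fig.savefig("foo_grid.png")  # matplotlib Figure.savefig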

View File

@ -1,133 +0,0 @@
# -*- coding: utf-8 -*-

import os

import pytest
import pandas as pd
from pandas.testing import assert_frame_equal

import camelot
from camelot.core import Table, TableList
from camelot.__version__ import generate_version

from .data import *

testdir = os.path.dirname(os.path.abspath(__file__))
testdir = os.path.join(testdir, "files")

def test_stream():
    df = pd.DataFrame(data_stream)
    filename = os.path.join(testdir, "health.pdf")
    tables = camelot.read_pdf(filename, flavor="stream")
    assert_frame_equal(df, tables[0].df)

def test_stream_table_rotated():
    df = pd.DataFrame(data_stream_table_rotated)
    filename = os.path.join(testdir, "clockwise_table_2.pdf")
    tables = camelot.read_pdf(filename, flavor="stream")
    assert_frame_equal(df, tables[0].df)
    filename = os.path.join(testdir, "anticlockwise_table_2.pdf")
    tables = camelot.read_pdf(filename, flavor="stream")
    assert_frame_equal(df, tables[0].df)

def test_stream_two_tables():
    df1 = pd.DataFrame(data_stream_two_tables_1)
    df2 = pd.DataFrame(data_stream_two_tables_2)
    filename = os.path.join(testdir, "tabula/12s0324.pdf")
    tables = camelot.read_pdf(filename, flavor="stream")
    assert len(tables) == 2
    assert df1.equals(tables[0].df)
    assert df2.equals(tables[1].df)

def test_stream_table_regions():
    df = pd.DataFrame(data_stream_table_areas)
    filename = os.path.join(testdir, "tabula/us-007.pdf")
    tables = camelot.read_pdf(
        filename, flavor="stream", table_regions=["320,460,573,335"]
    )
    assert_frame_equal(df, tables[0].df)

def test_stream_table_areas():
    df = pd.DataFrame(data_stream_table_areas)
    filename = os.path.join(testdir, "tabula/us-007.pdf")
    tables = camelot.read_pdf(
        filename, flavor="stream", table_areas=["320,500,573,335"]
    )
    assert_frame_equal(df, tables[0].df)

def test_stream_columns():
    df = pd.DataFrame(data_stream_columns)
    filename = os.path.join(testdir, "mexican_towns.pdf")
    tables = camelot.read_pdf(
        filename, flavor="stream", columns=["67,180,230,425,475"], row_tol=10
    )
    assert_frame_equal(df, tables[0].df)

def test_stream_split_text():
    df = pd.DataFrame(data_stream_split_text)
    filename = os.path.join(testdir, "tabula/m27.pdf")
    tables = camelot.read_pdf(
        filename,
        flavor="stream",
        columns=["72,95,209,327,442,529,566,606,683"],
        split_text=True,
    )
    assert_frame_equal(df, tables[0].df)

def test_stream_flag_size():
    df = pd.DataFrame(data_stream_flag_size)
    filename = os.path.join(testdir, "superscript.pdf")
    tables = camelot.read_pdf(filename, flavor="stream", flag_size=True)
    assert_frame_equal(df, tables[0].df)

def test_stream_strip_text():
    df = pd.DataFrame(data_stream_strip_text)
    filename = os.path.join(testdir, "detect_vertical_false.pdf")
    tables = camelot.read_pdf(filename, flavor="stream", strip_text=" ,\n")
    assert_frame_equal(df, tables[0].df)

def test_stream_edge_tol():
    df = pd.DataFrame(data_stream_edge_tol)
    filename = os.path.join(testdir, "edge_tol.pdf")
    tables = camelot.read_pdf(filename, flavor="stream", edge_tol=500)
    assert_frame_equal(df, tables[0].df)

def test_stream_layout_kwargs():
    df = pd.DataFrame(data_stream_layout_kwargs)
    filename = os.path.join(testdir, "detect_vertical_false.pdf")
    tables = camelot.read_pdf(
        filename, flavor="stream", layout_kwargs={"detect_vertical": False}
    )
    assert_frame_equal(df, tables[0].df)

def test_stream_duplicated_text():
    df = pd.DataFrame(data_stream_duplicated_text)
    filename = os.path.join(testdir, "birdisland.pdf")
    tables = camelot.read_pdf(filename, flavor="stream")
    assert_frame_equal(df, tables[0].df)
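
When table_areas and columns are combined for the stream flavor, the equal-length check shown earlier applies: one comma-separated column string per table area. A sketch pairing values borrowed from these tests (the specific pairing is illustrative, not a fixture from the suite):

import camelot

tables = camelot.read_pdf(
    "mexican_towns.pdf",
    flavor="stream",
    table_areas=["0,792,612,0"],     # whole letter-size page, illustrative
    columns=["67,180,230,425,475"],  # column separators from test_stream_columns
)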