diff --git a/.github/workflows/bandit.yml b/.github/workflows/bandit.yml new file mode 100644 index 0000000..c3f0557 --- /dev/null +++ b/.github/workflows/bandit.yml @@ -0,0 +1,51 @@ +# This workflow uses actions that are not certified by GitHub. +# They are provided by a third-party and are governed by +# separate terms of service, privacy policy, and support +# documentation. + +# Bandit is a security linter designed to find common security issues in Python code. +# This action will run Bandit on your codebase. +# The results of the scan will be found under the Security tab of your repository. + +# https://github.com/marketplace/actions/bandit-scan is ISC licensed, by abirismyname +# https://pypi.org/project/bandit/ is Apache v2.0 licensed, by PyCQA + +name: Bandit +on: + push: + branches: ["master"] + pull_request: + # The branches below must be a subset of the branches above + branches: ["master"] + schedule: + - cron: "0 10 * * 5" + +jobs: + bandit: + permissions: + contents: read # for actions/checkout to fetch code + security-events: write # for github/codeql-action/upload-sarif to upload SARIF results + actions: read # only required for a private repository by github/codeql-action/upload-sarif to get the Action run status + + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - name: Bandit Scan + uses: shundor/python-bandit-scan@9cc5aa4a006482b8a7f91134412df6772dbda22c + with: # optional arguments + # exit with 0, even with results found + exit_zero: true # optional, default is DEFAULT + # Github token of the repository (automatically created by Github) + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Needed to get PR information. + # File or directory to run bandit on + # path: # optional, default is . + # Report only issues of a given severity level or higher. Can be LOW, MEDIUM or HIGH. Default is UNDEFINED (everything) + # level: # optional, default is UNDEFINED + # Report only issues of a given confidence level or higher. 
Can be LOW, MEDIUM or HIGH. Default is UNDEFINED (everything) + # confidence: # optional, default is UNDEFINED + # comma-separated list of paths (glob patterns supported) to exclude from scan (note that these are in addition to the excluded paths provided in the config file) (default: .svn,CVS,.bzr,.hg,.git,__pycache__,.tox,.eggs,*.egg) + # excluded_paths: # optional, default is DEFAULT + # comma-separated list of test IDs to skip + # skips: # optional, default is DEFAULT + # path to a .bandit file that supplies command line arguments + # ini_path: # optional, default is DEFAULT diff --git a/.github/workflows/greetings.yml b/.github/workflows/greetings.yml index 6d163fd..fb97858 100644 --- a/.github/workflows/greetings.yml +++ b/.github/workflows/greetings.yml @@ -9,8 +9,8 @@ jobs: issues: write pull-requests: write steps: - - uses: actions/first-interaction@v1 - with: - repo-token: ${{ secrets.GITHUB_TOKEN }} - issue-message: 'Thanks for contributing this issue! We will be replying soon.' - pr-message: 'Thanks for contributing this PR! We will validade soon.' + - uses: actions/first-interaction@v1 + with: + repo-token: ${{ secrets.GITHUB_TOKEN }} + issue-message: "Thanks for contributing this issue! We will be replying soon." + pr-message: "Thanks for contributing this PR! We will validate soon." diff --git a/README.md b/README.md index 411eaa8..a551a20 100644 --- a/README.md +++ b/README.md @@ -3,19 +3,22 @@ [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AlvaroCavalcante/auto_annotate/blob/master/assets/auto_annotate_example.ipynb) # Auto Annotation Tool for TensorFlow Object Detection + Are you tired of labeling your images by hand when working with object detection? Have hundreds or thousands of images to label? Then this project will make your life easier: just create some annotations and let the machine do the rest for you! 
# Contents + - [How it works](#how) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - - [Command line](#command-line) - - [Code](#code) - - [Google Colab](#colab) + - [Command line](#command-line) + - [Code](#code) + - [Google Colab](#colab) - [Contribute](#contribute) # 🤔 How it works + This auto annotation tool is based on the idea of a semi-supervised architecture, where a model trained with a small amount of labeled data is used to produce the new labels for the rest of the dataset. As simple as that, the library uses an initial and simplified object detection model to generate the XML files with the image annotations (considering the PASCAL VOC format). @@ -24,6 +27,7 @@ Besides that, it's possible to define a confidence threshold for the detector, a If you want to know more technical details about the project, please refer to my [Medium article](https://medium.com/p/acf410a600b8#9e0e-aaa30a9f4b7a). # 📝 Prerequisites + To use this library you will need an object detection model pre-trained on a subsample of your dataset. As a semi-supervised solution, it's impossible to avoid manual annotation, but you'll need to label just a small amount of your data. It's hard to determine the number of images to label manually, since it depends on the complexity of your problem. If you want to detect dogs and cats and have 2000 images in your dataset, for example, probably 200 images are enough (100 per class). On the other hand, if you have dozens of classes or objects that are hard to detect, you may need more manual annotations to see the benefits of the semi-supervised approach. @@ -31,15 +35,19 @@ It's hard to determine the number of images to label manually, since it depends o After training this initial model, export your best checkpoint to the [SavedModel](https://www.tensorflow.org/guide/saved_model) format and you'll be ready to use the auto annotation tool! 
# 💾 Installation -It's recommended to use a Python [virtual environment](https://docs.python.org/3/library/venv.html) to avoid any compatibility issue with your TensorFlow version. + +It's recommended to use a Python [virtual environment](https://docs.python.org/3/library/venv.html) to avoid any compatibility issues with your TensorFlow version. In your environment, you can install the project using pip: -``` -$ pip install auto-annotate + +```bash +pip install auto-annotate ``` -# 👨‍🔬 Usage + # 👨‍🔬 Usage + You can use this tool either from the command line or directly in your Python code. For both, you'll have the same set of parameters: + - saved_model_path: The path of the **saved_model** folder with the initial model. - label_map_path: The path of the **label_map.pbtxt** file. - imgs_path: The path of the folder with your dataset images to label. @@ -47,16 +55,21 @@ You can use this tool either from the command line or directly in your Python co - threshold: Confidence threshold to accept the detections made by the model. The default is 0.5. ## Command line + To use this tool from the command line, you just need to run: -``` + +```bash python -m auto_annotate --label_map_path /example/label_map.pbtxt \ --saved_model_path /example/saved_model \ --imgs_path /example/dataset_images \ --xml_path /example/dataset_labels \ --threshold 0.65 ``` + ## Code + To use this tool from your Python code, check the following code snippet: + ```python from auto_annotate import AutoAnnotate @@ -71,12 +84,13 @@ ann_tool.generate_annotations() ``` ## Google Colab + For a complete working example, you can refer to this [Google Colab Notebook](https://colab.research.google.com/drive/14qgA9IUYCVAALJmJabvQ9sDxrKxEezwP?usp=sharing), where I share the details about installation and setup. # 🤝 Contribute + Contributions are welcome! Feel free to open a new issue if you have any problems using the library or find a bug! 
You can also use the [discussions](https://github.com/AlvaroCavalcante/auto_annotate/discussions) section to suggest improvements and ask questions! If you find this library useful, don't forget to give it a :star: or support the project: Buy Me a Coffee at ko-fi.com - diff --git a/setup.py b/setup.py index 5afa510..031c30a 100644 --- a/setup.py +++ b/setup.py @@ -57,10 +57,10 @@ def read(file_name): python_requires='>=3.8', install_requires=[ - 'numpy==1.22.4', - 'tensorflow==2.11.0', - 'Pillow==9.3.0', - 'tqdm==4.64.1', + 'numpy==1.26.3', + 'tensorflow==2.11.1', + 'tqdm==4.66.1', + 'Pillow==10.0.1', 'six==1.16.0' ] )