TensorFlow


Setup

Set up TensorFlow in a venv

https://linuxize.com/post/how-to-install-tensorflow-on-debian-9/
mkdir tensorflow
cd ./tensorflow/
python3 -m venv venv
source ./venv/bin/activate
pip install --upgrade pip
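The linked guide then installs TensorFlow itself into the virtual environment. The object-detection steps below were written against TensorFlow 1.x, so pinning a 1.x release may be needed:

pip install tensorflow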

My starting point will be:

https://heartbeat.fritz.ai/detecting-objects-in-videos-and-camera-feeds-using-keras-opencv-and-imageai-c869fe1ebcdb

Look at the following link to show/alter live video:

https://dzone.com/articles/object-detection-tutorial-in-tensorflow-real-time

Retrain models

My source

https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10

Steps

Configure PYTHONPATH environment variable

cd ~/tensorflow/tensorflow1/models
export PYTHONPATH=`pwd`:`pwd`/research:`pwd`/research/slim
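A quick sanity check that the path is picked up (object_detection lives under research, so the import should succeed without errors):

echo $PYTHONPATH
python3.7 -c "import object_detection; print('object_detection found')"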

Gather and Label Pictures

~/src/labelImg.py

Generate Training Data

cd ~/tensorflow/tensorflow1/models/research/object_detection
python3.7 ./xml_to_csv.py
ls /home/vissie/tensorflow/tensorflow1/models/research/object_detection/images
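If the script ran cleanly, train_labels.csv and test_labels.csv show up in that images directory. The tutorial's xml_to_csv.py writes one row per bounding box with this header:

filename,width,height,class,xmin,ymin,xmax,ymax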

Set labels to be used

vim ./generate_tfrecord.py
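The part to edit is the class_text_to_int function, which must match the label map below. For the basketball/shirt/shoe example from the tutorial it looks like this:

def class_text_to_int(row_label):
    # map each label string to the id used in labelmap.pbtxt
    if row_label == 'basketball':
        return 1
    elif row_label == 'shirt':
        return 2
    elif row_label == 'shoe':
        return 3
    else:
        return None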

python3.7 generate_tfrecord.py --csv_input=images/train_labels.csv --image_dir=images/train --output_path=train.record
python3.7 generate_tfrecord.py --csv_input=images/test_labels.csv --image_dir=images/test --output_path=test.record

Create Label Map and Configure Training

vim ./training/labelmap.pbtxt

Contents of ./training/labelmap.pbtxt:
item {
  id: 1
  name: 'basketball'
}

item {
  id: 2
  name: 'shirt'
}

item {
  id: 3
  name: 'shoe'
}

Configure training

cp ./samples/configs/faster_rcnn_inception_v2_pets.config ./training/.
vim ./training/faster_rcnn_inception_v2_pets.config
Line 9. Change num_classes to the number of different objects you want the classifier to detect. For the basketball, shirt, and shoe detector above, it would be num_classes: 3.
Line 106. Change fine_tune_checkpoint to:
fine_tune_checkpoint: "/home/vissie/tensorflow/tensorflow1/models/research/object_detection/faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"
 # Note: The below line limits the training process to 200K steps, which we
 # empirically found to be sufficient enough to train the pets dataset. This
 # effectively bypasses the learning rate schedule (the learning rate will
 # never decay). Remove the below line to train indefinitely.
 num_steps: 200000
Lines 123 and 125. In the train_input_reader section, change input_path and label_map_path to (see the excerpt after this list):
input_path: "/home/vissie/tensorflow/tensorflow1/models/research/object_detection/train.record"
label_map_path: "/home/vissie/tensorflow/tensorflow1/models/research/object_detection/training/labelmap.pbtxt"
Line 130. Change num_examples to the number of images you have in the images/test directory.
Lines 135 and 137. In the eval_input_reader section, change input_path and label_map_path to:
input_path: "/home/vissie/tensorflow/tensorflow1/models/research/object_detection/test.record"
label_map_path: "/home/vissie/tensorflow/tensorflow1/models/research/object_detection/training/labelmap.pbtxt"
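After those edits, the train_input_reader block (and the analogous eval_input_reader) should end up looking roughly like this:

train_input_reader: {
  tf_record_input_reader {
    input_path: "/home/vissie/tensorflow/tensorflow1/models/research/object_detection/train.record"
  }
  label_map_path: "/home/vissie/tensorflow/tensorflow1/models/research/object_detection/training/labelmap.pbtxt"
}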

Run the training

python3.7 train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
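Progress can be watched from a second terminal with TensorBoard pointed at the same directory:

tensorboard --logdir=training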


Export Inference Graph

python3.7 export_inference_graph.py --input_type image_tensor --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix training/model.ckpt-XXXX --output_directory inference_graph
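XXXX is the step number of the newest checkpoint in training/; one way to find it:

ls -t training/model.ckpt-*.index | head -n 1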

Use Your Newly Trained Object Detection Classifier!

Before running the Python scripts, you need to modify the NUM_CLASSES variable in the script to equal the number of classes you want to detect. (For my Pinochle Card Detector, there are six cards I want to detect, so NUM_CLASSES = 6.)
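For the basketball/shirt/shoe classifier trained above, that single edit in the detection script is just:

NUM_CLASSES = 3  # basketball, shirt, shoe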


Train using YOLOv3

source

https://pylessons.com/YOLOv3-introduction/

Downloading Dataset

python main.py downloader --classes 'Fire hydrant' 'Traffic light' Car Bus --type_csv train --limit 400
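The same OIDv4 toolkit call with --type_csv test pulls a matching test set (the limit here is just an example):

python main.py downloader --classes 'Fire hydrant' 'Traffic light' Car Bus --type_csv test --limit 100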

Converting label files to XML

python3.7 ./oid_to_pascal_voc_xml.py

Converting XML to YOLO v3 file structure

python3.7 voc_to_YOLOv3.py
This will create 4_CLASS_test_classes.txt and 4_CLASS_test.txt.
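4_CLASS_test.txt ends up with one line per image in the keras-yolo3 annotation style: the image path followed by space-separated boxes, each box written as x_min,y_min,x_max,y_max,class_id. An illustrative (made-up) line:

OID/Dataset/train/Car/000abc.jpg 100,120,250,300,2 400,80,510,200,2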

Update cfg file

https://github.com/ultralytics/yolov3/wiki/Train-Custom-Data
vim ./model_data/yolov3.cfg
Each YOLO layer has 255 outputs: 85 outputs per anchor [4 box coordinates + 1 object confidence + 80 class confidences], times 3 anchors. If you use fewer classes, you can reduce this to [4 + 1 + n] * 3 = 15 + 3*n outputs, where n is your class count. This modification should be made to the output filter preceding each of the 3 YOLO layers. Also change classes=80 to classes=n in each YOLO layer, where n is your class count (see the excerpt below).
Change the `batch` line to `batch=64`.
Change the `subdivisions` line to `subdivisions=8` (if training fails after this, try doubling it).
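For the four classes downloaded above, that works out to filters = (4 + 1 + 4) * 3 = 27, so each of the three [convolutional] blocks directly before a [yolo] section, and each [yolo] section itself, ends up with (only the changed keys shown):

[convolutional]
filters=27

[yolo]
classes=4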

Training

I had to downgrade Keras:
sudo pip3.7 install keras==2.2.4
python3.7 train_bottleneck.py

Old stuff

Maybe I'll start here...

https://medium.com/datadriveninvestor/building-an-image-classifier-using-tensorflow-3ac9ccc92e7c

and then...

https://www.tensorflow.org/hub/tutorials/image_retraining
https://www.datacamp.com/community/tutorials/tensorflow-tutorial
https://medium.com/@RaghavPrabhu/a-simple-tutorial-to-classify-images-using-tensorflow-step-by-step-guide-7e0fad26c22

Define image and object

https://github.com/tzutalin/labelImg


Training steps

My first

Based on: https://ersanpreet.wordpress.com/2018/08/18/creating-test-record-and-train-record-custom-object-detection-part-4/

Some install steps:

cd <somewhere>/models/research
~/Downloads/protoc-3.9.2-linux-x86_64/bin/protoc object_detection/protos/*.proto --python_out=.

test it:

python3.7 ./object_detection/builders/model_builder_test.py
./xml_to_csv.py 
./generate_tfrecord.py --csv_input=data/train_labels.csv  --output_path=data/train.record --image_dir=./images/train
./generate_tfrecord.py --csv_input=data/test_labels.csv  --output_path=data/test.record --image_dir=./images/test
python3.7 ../models/research/object_detection/legacy/train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_pets.config

Sadly, I had to copy my data, training, images and ssdxxxxxx folders to the models/research folder. I'm not keen on this.

python3.7 ./object_detection/legacy/train.py --logtostderr --train_dir=./training --pipeline_config_path=./training/ssd_mobilenet_v1_pets.config


If you are running out of memory and this is causing training to fail, there are a number of things you can try. First, try adding the arguments

batch_queue_capacity: 2
prefetch_queue_capacity: 2

to your config file in the train_config section. For example, placing the two lines between gradient_clipping_by_norm and fine_tune_checkpoint will work. The value of 2 is only a starting point to get training going; the defaults are 8 and 10 respectively, and raising the values back up should help speed up training.
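In the config that ends up looking something like this inside train_config (existing settings stay as they are; only the two new lines are added):

train_config: {
  # ... existing settings ...
  batch_queue_capacity: 2
  prefetch_queue_capacity: 2
  # ... fine_tune_checkpoint and the rest follow ...
}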

Ctrl-C to stop training, then convert your checkpoint to a frozen .pb graph:

python3.7 ./object_detection/export_inference_graph.py --input_type image_tensor --pipeline_config_path ./training/ssd_mobilenet_v1_pets.config --trained_checkpoint_prefix ./training/model.ckpt-206 --output_directory vis_obj_detection_graph