ENDIGE BOATING


Testing Artificial Intelligence (deep neural networks) for maritime collision avoidance

Posted on December 26, 2020 by Jaykay
Categories: Sensors, cameras and IoT; Boating electronics and hardware; Boating software; Commercial marine equipment and products
[Image 10]

Inspired by the OSCAR Navigation collision avoidance system, I wanted to test how an off-the-shelf Artificial Intelligence (AI) system, specifically a deep neural network based object labeling service, would work in a maritime collision avoidance application. The AI would be responsible for labeling the objects and marking their locations in the image. If that can be done consistently and at a relatively short interval, and if the camera is mounted in a fixed position with a gimbal or digital stabilization, it should be possible to estimate the speed and heading of the moving objects found in the image (using the speed and heading data from one's own instruments). It should also be possible to estimate the distance of an object from one's own ship.
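As a purely hypothetical sketch of the bearing estimate described above: assuming a fixed, forward-facing camera with a known horizontal field of view, and treating the horizontal offset of the bounding-box centre as linear in angle (a rough small-angle approximation for a pinhole camera), the true bearing of a detected object could be approximated like this. All names and the default field of view are illustrative, not from the original tests.

```python
# Hypothetical sketch: estimate the true bearing of a detected object from
# its normalized bounding box and own heading. Assumes a forward-facing,
# stabilized camera; linear pixel-to-angle mapping is an approximation.

def bearing_to_object(box_left, box_width, heading_deg, hfov_deg=60.0):
    """box_left/box_width are normalized (0.0-1.0) as in a Rekognition
    BoundingBox; heading_deg is own heading; hfov_deg is the camera's
    horizontal field of view (assumed value)."""
    centre = box_left + box_width / 2.0       # normalized x of box centre
    offset = (centre - 0.5) * hfov_deg        # degrees off the bow
    return (heading_deg + offset) % 360.0
```

With a second fix a short interval later, two such bearings (plus own position and speed) would give the basis for estimating the target's motion.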

I decided to use Amazon Rekognition for the tests because it offers good off-the-shelf tools for labeling and locating objects. Amazon also offers the possibility to train custom labels with a relatively small set of training data, but I wanted to see how the product would work without any additional training.
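For reference, a minimal sketch of calling Rekognition's DetectLabels API with boto3, roughly the kind of call behind these tests (the function names and thresholds here are illustrative, and AWS credentials must be configured):

```python
def detect_boats(image_path, min_confidence=70.0):
    """Send a local image to Amazon Rekognition and return its Labels list."""
    import boto3  # imported lazily so the helper below works without AWS
    client = boto3.client("rekognition")
    with open(image_path, "rb") as f:
        response = client.detect_labels(
            Image={"Bytes": f.read()},
            MaxLabels=10,
            MinConfidence=min_confidence,
        )
    return response["Labels"]

def top_labels(labels, n=6):
    """Sort a Labels list by confidence, descending, as shown under the images."""
    return sorted(labels, key=lambda l: l["Confidence"], reverse=True)[:n]
```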

I did the tests using still photos showing a boat in the distance. Because it is currently winter, I could not take the photos myself and had to use photos found on the internet. I will try to repeat the test with a more systematically collected set of photos in the summer.

Artificial neural networks (ANNs) never tell exactly what is in an image; rather, they give the probabilities of different objects appearing in it. Under each image below, I have provided a list of the top objects Amazon Rekognition found, sorted by probability.

In addition to the probabilities, Amazon Rekognition also provides a location (a bounding box) for an object found in the image, if it can determine the location with high enough probability.

[Image 11]
Vessel 99.7 %, Transportation 99.7 %, Watercraft 99.7 %, Vehicle 99.7 %,
Boat 98.7 %, Sailboat 96.3 %

For those interested, I have also provided the full response JSON for this image at the end of the post.

Here is another image with a sailboat.

[Image 12]
Boat 98.5 %, Vehicle 98.5 %, Transportation 98.5 %, Sailboat 95.4 %,
Watercraft 82.5 %, Vessel 82.5 %

A sailboat is clearly an easy target to find. What about other types of boats?

Below, a boat is detected with 96% probability and its location is also found.

[Image 17]

Both the swans and the boat are detected in the image below.

[Image 15]

Below, a vessel is detected with 97% probability, but no location is determined.

[Image 18]

But when the image is cropped just a bit, the vessel is still found with 97% probability, and now its location is determined as well.

[Image 19]

In the following image there is also one false positive, as the object in the center is not a boat. The other one is a ship rather than a boat, but detecting it as a vessel is good anyway.

[Image 13]
Watercraft 99.5 %, Transportation 99.5 %, Vessel 99.5 %, Vehicle 99.5 %,
Nature 92.8 %, Boat 92 %

No boat is located in the following image, but the ANN determined that the image contains a vehicle with 70% probability and a ship with 65% probability. The boat is only 11×7 pixels in this photo.

[Image 14]

By lowering the image resolution in small steps, it can be determined that if the boat is visible in the image and its size in one direction is at least 35 pixels (e.g. 35 × 20 pixels), it will be "seen" and typically its location can also be determined. Splitting or cropping the image also makes at least locating the object work better. The recognition works amazingly well with the generic neural network. It would be interesting to see how much it could still be improved by custom labeling (using training data from the same camera that captures the images to be evaluated).
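The step-by-step downscaling described above could be sketched like this, assuming Pillow is available; `detect` stands in for whatever label-detection call is used (the scale steps and label names are illustrative):

```python
from io import BytesIO
from PIL import Image

def resolution_sweep(image_path, detect, scales=(1.0, 0.75, 0.5, 0.35, 0.25)):
    """Shrink the image in steps and record whether a boat/vessel label is
    still returned at each scale. `detect` takes JPEG bytes and returns a
    Rekognition-style Labels list (a stand-in for the real API call)."""
    results = {}
    with Image.open(image_path) as im:
        for scale in scales:
            size = (max(1, int(im.width * scale)), max(1, int(im.height * scale)))
            buf = BytesIO()
            im.resize(size).save(buf, format="JPEG")
            labels = detect(buf.getvalue())
            results[scale] = any(l["Name"] in ("Boat", "Vessel") for l in labels)
    return results
```

The smallest scale at which the result stays `True` gives the minimum pixel size at which the object is still found.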

The Amazon Rekognition response JSON for the first image in this post:

{
    "Labels": [
        {
            "Name": "Vessel",
            "Confidence": 99.7686538696289,
            "Instances": [],
            "Parents": [
                {
                    "Name": "Vehicle"
                },
                {
                    "Name": "Transportation"
                }
            ]
        },
        {
            "Name": "Transportation",
            "Confidence": 99.7686538696289,
            "Instances": [],
            "Parents": []
        },
        {
            "Name": "Watercraft",
            "Confidence": 99.7686538696289,
            "Instances": [],
            "Parents": [
                {
                    "Name": "Vehicle"
                },
                {
                    "Name": "Transportation"
                }
            ]
        },
        {
            "Name": "Vehicle",
            "Confidence": 99.7686538696289,
            "Instances": [],
            "Parents": [
                {
                    "Name": "Transportation"
                }
            ]
        },
        {
            "Name": "Boat",
            "Confidence": 98.71965789794922,
            "Instances": [
                {
                    "BoundingBox": {
                        "Width": 0.05666980892419815,
                        "Height": 0.11389521509408951,
                        "Left": 0.10582049936056137,
                        "Top": 0.6451812386512756
                    },
                    "Confidence": 98.71965789794922
                }
            ],
            "Parents": [
                {
                    "Name": "Vehicle"
                },
                {
                    "Name": "Transportation"
                }
            ]
        },
        {
            "Name": "Sailboat",
            "Confidence": 96.3071517944336,
            "Instances": [],
            "Parents": [
                {
                    "Name": "Boat"
                },
                {
                    "Name": "Vehicle"
                },
                {
                    "Name": "Transportation"
                }
            ]
        }
    ],
    "LabelModelVersion": "2.0"
}
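To turn a response like the one above into drawable pixel coordinates, the normalized `BoundingBox` values (fractions of image width/height) just need to be scaled by the image dimensions. A small parsing sketch (helper name is illustrative):

```python
def instance_boxes(response, image_width, image_height):
    """Extract every located instance from a Rekognition DetectLabels
    response, converting normalized bounding boxes to pixel coordinates."""
    boxes = []
    for label in response["Labels"]:
        for inst in label["Instances"]:
            bb = inst["BoundingBox"]
            boxes.append({
                "name": label["Name"],
                "confidence": inst["Confidence"],
                "left": round(bb["Left"] * image_width),
                "top": round(bb["Top"] * image_height),
                "width": round(bb["Width"] * image_width),
                "height": round(bb["Height"] * image_height),
            })
    return boxes
```

Note that labels without a location (like "Vessel" above) have an empty `Instances` list and simply contribute no boxes.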


Tags: AI, collision avoidance, ML, neural networks


One thought on “Testing Artificial Intelligence (deep neural networks) for maritime collision avoidance”

  1. Patrick Haebig says:
    March 1, 2021 at 12:41

    Good job!



Copyright © 2025 ENDIGE BOATING.
