
In this article, we will look at detecting and estimating the orientation of ArUco markers using Python and OpenCV.
First, we’ll start by generating the ArUco markers. You can use a website to generate single markers one by one, or use my program to generate entire pages of markers.
Once you’ve made some markers, I’ll show you another program to detect them. This program can also estimate the pose axes of these ArUco markers.
The code I wrote here is in Python, but for more demanding detection and pose-estimation applications, you might want to implement it in C++.
What you see below is the result of my curiosity and willingness to experiment with image processing. Any constructive comments are welcome.
If you want to make ArUco markers one by one, you can use this website by Oleg Kalachev. It provides a simple interface, as you can see below:

Select the ArUco dictionary, the marker’s ID and its size. Then download the marker and print it out.
I wanted to print the whole page with ArUco markers, so this website was not helpful to me.
Now, OpenCV provides two functions, cv2.aruco.GridBoard.create() and cv2.aruco.GridBoard.draw(), to do exactly this: create an entire page of ArUco markers and then draw it. You can read more about those functions in the OpenCV documentation.
Still, I was unhappy with the results when I tried these functions: the ArUco markers came out the wrong size when printed, some markers were not placed correctly within the boundaries of the page, and there is no option to add the markers’ IDs.
So I decided to write my own program to make pages of ArUco markers. You can find it below:
The idea of this program is to create an A4 page with ArUco markers (or April tags) placed on the grid. Several input parameters determine the size of this grid:
- Marker/tag size in mm
- Margin size between markers in mm
- Number of tags in the X direction
- Number of tags in the Y direction
- Whether the tag’s ID should be written above the tag
The program checks whether the supplied parameters produce a grid that fits properly within the boundaries of an A4 page. If not, the program outputs an error message.
In addition to the above parameters, you can also supply these other parameters:
- image output file name
- ID of the first tag to be inserted on the page
- DPI of output image
- Type of marker or tag to generate
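As a sketch, the parameter handling and the A4 fit check could look like this (the argument names and defaults are assumptions inferred from the example invocations below, not my program’s exact code):

```python
import argparse

A4_W_MM, A4_H_MM = 210, 297  # A4 page size in mm

def build_parser():
    # Argument names mirror the example command lines; defaults are assumptions.
    p = argparse.ArgumentParser(description="Generate an A4 page of ArUco markers")
    p.add_argument("-o", "--output", default="aruco_markers.png")
    p.add_argument("-i", "--first-id", type=int, default=0)
    p.add_argument("-t", "--type", default="DICT_5X5_50")
    p.add_argument("-d", "--dpi", type=int, default=72)
    p.add_argument("-s", "--size", type=float, default=30, help="marker size in mm")
    p.add_argument("-m", "--margin", type=float, default=10, help="margin in mm")
    p.add_argument("-x", type=int, default=3)
    p.add_argument("-y", type=int, default=4)
    p.add_argument("--write-id", dest="write_id", action="store_true", default=True)
    p.add_argument("--no-write-id", dest="write_id", action="store_false")
    return p

def grid_fits_a4(n_x, n_y, size_mm, margin_mm):
    # Total grid extent: markers plus margins between and around them.
    w = n_x * size_mm + (n_x + 1) * margin_mm
    h = n_y * size_mm + (n_y + 1) * margin_mm
    return w <= A4_W_MM and h <= A4_H_MM
```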
Here are some examples of how to generate a page with markers:
python aruco_gen_page.py -o "aruco_markers.png" -i 0 -t "DICT_5X5_50" -d 72 --write-id -x 3 -y 4
This will produce the following page:


python aruco_gen_page.py -o "aruco_markers.png" -i 10 -t "DICT_4X4_50" -d 72 -s 25 -m 10 --no-write-id -x 5 -y 7
It gives the following result:


To ensure that the generated images print correctly under Windows 11, the following approach works best:
Go to the folder where the marker images were saved. Then right-click on the image icon and select ‘Print’. You will be presented with the following dialogue (yours probably won’t be partially in Dutch, like mine):

Here choose your printer, set the paper to A4, and choose ‘Standard’ quality and ‘Plain’ paper (though I suspect these settings may vary from printer to printer). Make sure ‘Full page photo’ is selected and ‘Fit picture to frame’ is checked.
I have not tested this program with other operating systems.
Before we can detect and estimate the pose of the markers, we need to calibrate our camera. See my article below for how to do this.
When you calibrate your camera, you obtain the camera matrix and the distortion coefficients. I put these in a JSON file, as shown in that article.
Below is the program I used for detecting the markers and estimating their pose.
First, load the camera matrix and distortion coefficients from the JSON file.
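For illustration, loading the calibration data might look like this (the JSON field names are assumptions; use whatever names your calibration script wrote):

```python
import json
import numpy as np

def load_calibration(path):
    """Load the camera matrix and distortion coefficients from a JSON file."""
    with open(path) as f:
        data = json.load(f)
    # Field names are assumptions; match them to your calibration output.
    camera_matrix = np.array(data["camera_matrix"], dtype=np.float64)
    dist_coeffs = np.array(data["distortion_coefficients"], dtype=np.float64)
    return camera_matrix, dist_coeffs
```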
Next, read a frame from the camera to determine the width and height of the images in the video stream.
Then, use the OpenCV functions cv2.getOptimalNewCameraMatrix() and cv2.initUndistortRectifyMap() to determine the region of interest roi and the mapx and mapy remapping parameters.
Inside an infinite loop, we read a frame from the video stream. This frame is remapped (undistorted) using the mapx and mapy parameters we just computed. The image is then cropped to include only the region of interest.
Now we can detect the markers using the OpenCV function cv2.aruco.detectMarkers(). I have also included the option to use a grayscale version of the current image frame.
Then, if any markers are detected (reported via their corners), we can draw a box around each marker in the video frame, including its ID, using cv2.aruco.drawDetectedMarkers(). We can also draw the axes showing the pose of each marker in the frame using cv2.aruco.estimatePoseSingleMarkers() and cv2.drawFrameAxes(). You can read more about these functions in the references below.
Finally, cv2.imshow() displays the frame in a GUI window on the screen.
Below are two screenshots of the GUI showing that the program correctly determines the pose axes and the ID of each marker on the pages we generated earlier.


Most or all of the markers are detected when I hold the pages relatively still or move them only slightly. On the other hand, if I rapidly rotate the pages in front of the camera, at some point no markers are found anymore. My guess is that this Python implementation doesn’t run fast enough to ‘follow’ the motion.
If that is true, and if we need to detect ArUco markers in, for example, an autonomous vehicle, then the implementation should be done in C++ to be fast enough.
- “Creating ArUco Markers with OpenCV and Python” by PyImageSearch
- “Detecting ArUco Markers with OpenCV and Python” by PyImageSearch
- “3D Pose Estimation Using ArUco Tags in Python” on programming.vip
- “Python cv2.initUndistortRectifyMap() Examples” on ProgramCreek.com