As a small Christmas present, I want to show you how easy it has become to build Augmented Reality yourself, thanks to Ogre and OpenCV. You should know that my other interest, besides graphics, lies in Computer Vision.
The demo does not rely on proprietary solutions like ARCore or ARKit – everything is done with open-source code that you can inspect and learn from. But let's start with a teaser:
This demo can be put together in less than 50 lines of code, thanks to the OpenCV ovis module that glues Ogre and OpenCV together. Next, I will briefly walk you through the steps that are needed:
First, we have to capture some images to cover the Reality part of AR. Here, OpenCV provides a unified API that works with your webcam, industrial cameras, or a pre-recorded video:
import cv2 as cv
imsize = (1280, 720) # the resolution to use
cap = cv.VideoCapture(0) # open the default camera
cap.set(cv.CAP_PROP_FRAME_WIDTH, imsize[0])
cap.set(cv.CAP_PROP_FRAME_HEIGHT, imsize[1])
img = cap.read()[1] # grab an image
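Note that read() actually returns a tuple of a success flag and the image – in real code you probably want to check that flag. A minimal sketch:
ok, img = cap.read()
if not ok:
    raise RuntimeError("could not grab an image – is the camera connected?")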
Then, we have to set up the most crucial part of AR: camera tracking. For this, we will use ArUco markers – the QR-code-like quads that surround Sinbad in the teaser. Unsurprisingly, OpenCV ships with this vision algorithm:
adict = cv.aruco.Dictionary_get(cv.aruco.DICT_4X4_50)
# extract 2D marker-corners from image
corners, ids = cv.aruco.detectMarkers(img, adict)[:2]
# convert corners to 3D transformations [R|t]
# the marker side length of 5 defines the scale of the scene
rvecs, tvecs = cv.aruco.estimatePoseSingleMarkers(corners, 5, K, None)[:2]
If you look closely, you will see that we are using an undefined variable K – this is the intrinsic matrix specific to your camera. If you want precise results, you should calibrate your camera to measure it, for instance using the web service at calibdb.net, which will also simply hand you the parameters if your camera is already known.
However, if you just want to continue, you can use the following values, which should roughly match any webcam at 1280x720 px:
import numpy as np
K = np.array(((1000, 0, 640), (0, 1000, 360), (0, 0, 1.))) # focal length ~1000 px, principal point at the image center
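With K in place, you can quickly sanity-check the detection and the estimated poses by drawing them back onto the image – a small debugging sketch using the aruco drawing helpers, with the distortion coefficients set to None:
cv.aruco.drawDetectedMarkers(img, corners, ids) # outline the detected markers
for rvec, tvec in zip(rvecs, tvecs):
    cv.aruco.drawAxis(img, K, None, rvec, tvec, 2.5) # axis length in marker units
cv.imshow("debug view", img)
cv.waitKey()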
So now we have the image and the corresponding 3D transformation of the camera – only the Augmented part is missing. This is where Ogre/ovis comes into play:
# reference the 3D mesh resources
cv.ovis.addResourceLocation("packs/Sinbad.zip")
# create an Ogre window for rendering
win = cv.ovis.createWindow("OgreWindow", imsize, flags=cv.ovis.WINDOW_SCENE_AA)
win.setBackground(img)
# make Ogre renderings match your camera images
win.setCameraIntrinsics(K, imsize)
# create the virtual scene, consisting of Sinbad and a light
win.createEntity("figure", "Sinbad.mesh", tvec=(0, 0, 5), rot=(1.57, 0, 0))
win.createLightEntity("sun", tvec=(0, 0, 100))
# position the camera according to the first marker detected
win.setCameraPose(tvecs[0].ravel(), rvecs[0].ravel(), invert=True)
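To animate things, the above calls simply have to be repeated for every frame. A minimal main loop could look roughly like this (just a sketch based on the steps above – the official sample linked below is more complete):
while cv.ovis.waitKey(1) != 27: # render, process events and exit on ESC
    img = cap.read()[1]
    win.setBackground(img)
    corners, ids = cv.aruco.detectMarkers(img, adict)[:2]
    if ids is None:
        continue # no marker visible, keep the last camera pose
    rvecs, tvecs = cv.aruco.estimatePoseSingleMarkers(corners, 5, K, None)[:2]
    win.setCameraPose(tvecs[0].ravel(), rvecs[0].ravel(), invert=True)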
- You can find the full source code for the above steps, nicely combined into a main loop, over at OpenCV.
- Alternatively, see this Ogre sample, which works with both Ogre and OpenCV installed via pip.
To record the results, you can use win.getScreenshot() and dump it into a cv.VideoWriter – contrary to the name, this works in real-time.
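A rough sketch of that, assuming the MJPG codec is available on your system:
fourcc = cv.VideoWriter_fourcc(*"MJPG") # assuming MJPG is available
vid = cv.VideoWriter("ar_out.avi", fourcc, 30, imsize)
# inside the main loop, after rendering:
vid.write(win.getScreenshot())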
Extending the above code to use cv.aruco.GridBoard, as done in the teaser video, is left as an exercise for the reader, as this is more on the OpenCV side of things.
Also, if you would rather use ARCore on Android, there is a sample showing how to use the SurfaceTexture with Ogre. With this, you should be able to modify the hello_ar_java sample from the arcore-sdk to use Ogre.