
ARKit 101: How to put 2D images like a painting or a photo in augmented reality on the wall «Mobile AR News :: Next Reality



In a previous tutorial, we used ARKit 1.5 to measure vertical surfaces such as walls, books, and monitors. With the advent of vertical plane anchors, we can now also attach objects to these vertical surfaces.

Learn how to build your own augmented reality app for iPads and iPhones using ARKit. In particular, we'll cover how to place virtual images, like the Mona Lisa, on our walls.

What will you learn?

We'll learn how to place 2D images on vertical walls through SceneKit, using ARKit 1.5.

Minimum Requirements

Step 1: Download the required assets

To help you follow this tutorial, I've created a folder with the 2D assets and Swift files needed for the project. So that you don't get lost in this guide, download the zipped folder with the assets and unzip it.

Step 2: Set Up the AR Project in Xcode

If you're not sure how to do that, follow Step 2 in our post about controlling a 3D model using hitTest to get your AR project set up in Xcode. Give your project a name, e.g., NextReality_Tutorial8. Be sure to do a quick test run before continuing with the tutorial below.

Step 3: Import assets into your project

In the Project Navigator, click on the Assets.xcassets folder. We'll add our 2D images there. Then right-click on the left pane of the area on the right side of the Project Navigator. Select "Import" and add the "overlay_grid.png" and "mona-lisa.jpg" files from the unzipped "Assets" folder.

Next, right-click again on the yellow folder for "NextReality_Tutorial8" (or whatever you named your project) in the Project Navigator. Select the "Add Files to 'NextReality_Tutorial8'" option.

Navigate to the unzipped "Assets" folder and select the "Grid.swift" file. Be sure to check "Copy items if needed" and leave everything else as is. Then click "Add."

"Grid.swift" should now be added to your project. The Project Navigator should look something like this:

This file helps render an image of a grid for every vertical plane ARKit detects.
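You won't need to write this class yourself — it's included in the downloaded folder — but as a rough sketch of what it does, think of a Grid as an SCNNode subclass that stretches a textured plane over its ARPlaneAnchor. The sketch below follows the names used in this tutorial; the exact implementation in the downloaded file may differ:

```swift
import ARKit
import SceneKit

// A sketch of the Grid node: an SCNPlane textured with the overlay_grid
// image, sized and repositioned to match its ARPlaneAnchor as ARKit
// refines the detected plane.
class Grid: SCNNode {
    var anchor: ARPlaneAnchor
    private var planeGeometry: SCNPlane!
    private var planeNode: SCNNode!

    init(anchor: ARPlaneAnchor) {
        self.anchor = anchor
        super.init()
        setup()
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func update(anchor: ARPlaneAnchor) {
        // Resize and recenter the plane as ARKit expands/refines the anchor
        self.anchor = anchor
        planeGeometry.width = CGFloat(anchor.extent.x)
        planeGeometry.height = CGFloat(anchor.extent.z)
        planeNode.position = SCNVector3(anchor.center.x, 0, anchor.center.z)
    }

    private func setup() {
        planeGeometry = SCNPlane(width: CGFloat(anchor.extent.x),
                                 height: CGFloat(anchor.extent.z))
        let material = SCNMaterial()
        material.diffuse.contents = UIImage(named: "overlay_grid")
        planeGeometry.materials = [material]

        planeNode = SCNNode(geometry: planeGeometry)
        planeNode.position = SCNVector3(anchor.center.x, 0, anchor.center.z)
        // Lay the plane flat in the anchor's local space (its normal is +y)
        planeNode.eulerAngles = SCNVector3(-Float.pi / 2, 0, 0)
        addChildNode(planeNode)
    }
}
```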

Step 4: Placing a grid to display detected vertical planes

To quickly review ARKit's ability to detect planes, take a look at our horizontal plane detection tutorial. Although that covers horizontal plane detection, the strategies and logic for detecting vertical planes are quite similar.

Note: This step will be quite similar to Step 4 from the previous vertical plane tutorial.

Open the ViewController.swift class by double-clicking it. If you want to follow along with the final Step 4 code, just open this link to view it on GitHub.

In the ViewController.swift file, modify the scene creation line in the viewDidLoad() method. Change it from:

  let scene = SCNScene(named: "art.scnassets/ship.scn")!

To the following (which makes sure we do not create a scene with the default ship model):

  let scene = SCNScene()

Next, find this line at the top of the file:

  @IBOutlet var sceneView : ARSCNView! 

Under this line, add the following to create an array of Grids for all detected vertical planes:

  var grids = [Grid] () 

Copy the two methods listed below and paste them at the end of the file, before the last curly brace (}) in the file. These methods let us add our grid to the vertical planes detected by ARKit, as a visual indicator.

  func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
      guard let planeAnchor = anchor as? ARPlaneAnchor, planeAnchor.alignment == .vertical else { return }
      let grid = Grid(anchor: planeAnchor)
      self.grids.append(grid)
      node.addChildNode(grid)
  }

  func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
      guard let planeAnchor = anchor as? ARPlaneAnchor, planeAnchor.alignment == .vertical else { return }
      let grid = self.grids.filter { grid in
          return grid.anchor.identifier == planeAnchor.identifier
      }.first

      guard let foundGrid = grid else {
          return
      }

      foundGrid.update(anchor: planeAnchor)
  }

Let's quickly talk about what's happening in these two methods:

  • didAdd() is called when a new node is added to the ARSCNView. Here, we make sure that the detected ARPlaneAnchor corresponds to a vertical plane, then add it as our Grid object, which adds the grid image we imported to every detected vertical plane.

  • didUpdate() is called whenever newer ARPlaneAnchor nodes are detected (again, we make sure they correspond to vertical planes), or when the plane is expanded. In that case, we want our grid to be updated and expanded as well. We do this by calling update() on that particular Grid.
Now, let's turn on feature points. Under this line in viewDidLoad():

  sceneView.showsStatistics = true

Add:

  sceneView.debugOptions = ARSCNDebugOptions.showFeaturePoints

Next, we enable detection of vertical planes. Under this line in viewWillAppear():

  let configuration = ARWorldTrackingConfiguration()

Add:

  configuration.planeDetection = .vertical

This is very important! It ensures that ARKit can detect vertical planes in the real world. The feature points let us see all the 3D points ARKit is able to detect.
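Putting the pieces from this step together, the relevant parts of viewDidLoad() and viewWillAppear() should now look roughly like this (a sketch; the rest of your Xcode-generated template stays as is, and minor details may differ):

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self
    sceneView.showsStatistics = true
    // Visualize the raw 3D feature points ARKit detects
    sceneView.debugOptions = ARSCNDebugOptions.showFeaturePoints

    // Empty scene instead of the template's default ship model
    let scene = SCNScene()
    sceneView.scene = scene
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    let configuration = ARWorldTrackingConfiguration()
    // Detect vertical planes (walls) — requires ARKit 1.5 (iOS 11.3+)
    configuration.planeDetection = .vertical
    sceneView.session.run(configuration)
}
```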

Now launch your app on your phone and walk around. Focus on a well-lit wall or flat, vertical surface; when a vertical plane is detected, blue grids should appear:

Checkpoint: Your entire project at the end of this step should look like the final Step 4 code on my GitHub.

Step 5: Use hitTest to place the Mona Lisa on a wall

Have you ever seen the Mona Lisa? It's a marvel to behold, though it's quite small in person. For centuries, the whole world has talked about it, and now you can hang it on a wall in your own apartment.

We'll use our old friend hitTest to place the Mona Lisa image on a detected vertical wall.

First, let's add a gesture recognizer to our scene view. Open the "ViewController.swift" class (all work from here on will involve this file) and add the following lines at the end of the viewDidLoad() method:

  let gestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(tapped))
  sceneView.addGestureRecognizer(gestureRecognizer)

Next, let's add the tapped() method, which will be called when a tap gesture is registered on the phone. Add the following code at the end of the file, but before the last curly brace (}):

  @objc func tapped(gesture: UITapGestureRecognizer) {
      // Get the 2D position of the touch event on the screen
      let touchPosition = gesture.location(in: sceneView)

      // Translate those 2D points to 3D points using hitTest (existing plane)
      let hitTestResults = sceneView.hitTest(touchPosition, types: .existingPlaneUsingExtent)

      // Get the hitTest results and ensure the hitTest corresponds to a grid that has been placed on a wall
      guard let hitTest = hitTestResults.first, let anchor = hitTest.anchor as? ARPlaneAnchor, let gridIndex = grids.index(where: { $0.anchor == anchor }) else {
          return
      }
      addPainting(hitTest, grids[gridIndex])
  }

Here, we're essentially translating the 2D points that the touch gesture on the iPhone's screen refers to into real-world 3D points using hitTest. We make sure to use the existingPlaneUsingExtent hitTest type, so that results come only from planes that have been detected with their dimensions (extent), i.e., the ones displaying our grids. We then make sure the anchor detected by the hitTest is an ARPlaneAnchor rather than some feature point, and that the anchor itself correlates to a grid that has already been detected and displayed. Finally, we call the addPainting() method to actually place the picture on the wall.

Add the addPainting() method under the tapped() method, but before the last curly brace (}) in the file:

  func addPainting(_ hitResult: ARHitTestResult, _ grid: Grid) {
      // 1.
      let planeGeometry = SCNPlane(width: 0.2, height: 0.35)
      let material = SCNMaterial()
      material.diffuse.contents = UIImage(named: "mona-lisa")
      planeGeometry.materials = [material]

      // 2.
      let paintingNode = SCNNode(geometry: planeGeometry)
      paintingNode.transform = SCNMatrix4(hitResult.anchor!.transform)
      paintingNode.eulerAngles = SCNVector3(paintingNode.eulerAngles.x + (-Float.pi / 2), paintingNode.eulerAngles.y, paintingNode.eulerAngles.z)
      paintingNode.position = SCNVector3(hitResult.worldTransform.columns.3.x, hitResult.worldTransform.columns.3.y, hitResult.worldTransform.columns.3.z)

      sceneView.scene.rootNode.addChildNode(paintingNode)
      grid.removeFromParentNode()
  }

Here's what's happening above:

1. We create a 2D plane geometry, SCNPlane, and give it the mona-lisa image we downloaded earlier as its material.

2. We then set the geometry on a new SCNNode called paintingNode. We make sure the transform of this node (that is, the combination of the node's rotation, position, and scale properties) is set to the transform value of the hitResult's anchor. Then we set the Euler angles accordingly (these define the angle and rotation of the node around the x, y, and z axes). Finally, we set the node's (the painting's) position based on the hitTest result, at the point where the tap gesture occurred.

After inserting the paintingNode into the scene, we remove the Grid it sits on, so we can admire the Mona Lisa without blue grid lines!
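To see why the position and rotation lines in addPainting() do what they do, here's a small framework-free sketch: the hit test's worldTransform is a column-major 4×4 matrix whose fourth column holds the translation, and the extra -π/2 rotation about x turns the SCNPlane's default +z normal into the plane anchor's +y normal. (The tap point below is made up for illustration.)

```swift
import Foundation

// The hit test's worldTransform is a column-major 4x4 matrix; its fourth
// column holds the translation. Hypothetical tap point on a wall: (0.5, 1.2, -2.0).
let column3: [Double] = [0.5, 1.2, -2.0, 1.0]

// Extracting the position — what SCNVector3(hitResult.worldTransform.columns.3.x, ...) does:
let position = (x: column3[0], y: column3[1], z: column3[2])

// Why shift eulerAngles.x by -π/2? An SCNPlane faces along +z, but a plane
// anchor's node lies flat with its normal along +y. Rotation about x by θ:
//   y' = y·cos θ − z·sin θ,  z' = y·sin θ + z·cos θ
func rotateAboutX(_ v: (x: Double, y: Double, z: Double), by theta: Double) -> (x: Double, y: Double, z: Double) {
    return (v.x, v.y * cos(theta) - v.z * sin(theta), v.y * sin(theta) + v.z * cos(theta))
}

// The SCNPlane's +z normal, rotated by -π/2 about x, ends up along +y — matching the wall anchor.
let normal = rotateAboutX((x: 0, y: 0, z: 1), by: -Double.pi / 2)
print(position) // (x: 0.5, y: 1.2, z: -2.0)
print(normal)   // ≈ (x: 0.0, y: 1.0, z: 0.0)
```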

Save and run the app. Walk around and find a well-lit, textured, vertical flat surface like a wall.

Note: You may need to get quite close to a wall, as most walls lack the texture ARKit needs to detect vertical planes. I'd suggest finding a window or another colored vertical surface for ARKit to detect the planes. After discovering a wall through the blue grid, tap an area on the grid to place the Mona Lisa on the wall. You should see something like this:

Checkpoint: Your entire project at the end of this step should look like the final Step 5 code on my GitHub.

What we have achieved

Well done! You were able to successfully detect a wall with ARKit 1.5 and place an object on it! Isn't it wonderful what ARKit can do? Thanks to Apple's update, ARKit completely hides the hassle of dealing with complicated math and computer vision logic. With this tutorial, we were able to place a detailed picture of the Mona Lisa on a wall in our own home. Isn't that nice? Feel free to play around by placing different pictures and resizing them.
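For instance, swapping in a different picture or size only means changing the SCNPlane dimensions and the material's contents in addPainting() — a quick sketch, where "starry-night" is a hypothetical image you'd add to the asset catalog yourself:

```swift
// Hypothetical variant of the geometry setup in addPainting():
// a landscape canvas (30 cm x 24 cm — SCNPlane units are meters)
// showing a different image from the asset catalog.
let planeGeometry = SCNPlane(width: 0.3, height: 0.24)
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "starry-night") // hypothetical asset
planeGeometry.materials = [material]
```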

If you'd like the full code for this project, you can find it in my GitHub repository. I hope you enjoyed this ARKit tutorial. If you have any comments or feedback, please feel free to leave them in the comments section. Happy coding!

Don't Miss: How to Measure Walls with ARKit 1.5

Cover image and screenshots by Ambuj Punn/Next Reality
