
ARKit 101: Placing a Virtual TV and Playing a Video in Augmented Reality



In an earlier tutorial, we were able to place the Mona Lisa on vertical surfaces such as walls, books, and monitors using ARKit 1.5. By combining the features of SceneKit and SpriteKit (Apple's 2D graphics engine), ARKit also lets you play video on a flat surface.

In this tutorial, you will learn how to build your own augmented reality app for iPhones and iPads with ARKit. Specifically, we'll go over how to play a video on a 3D TV in ARKit.

What will you learn?

We will learn how to play a video on a 2D plane using SceneKit and SpriteKit in ARKit. The core trick is shown in isolation in the sketch below.
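Before we dive into the project, here is a minimal, self-contained sketch of that trick, assuming an illustrative video file and plane size that are not from the tutorial's assets: a SpriteKit scene containing an SKVideoNode is assigned as the diffuse material of a SceneKit plane, so the video renders on a flat surface in 3D.

  import SceneKit
  import SpriteKit

  // Minimal sketch: play a video on a flat SceneKit plane by using a
  // SpriteKit scene as the plane's material. Names and sizes here are
  // illustrative only.
  func makeVideoPlane() -> SCNNode {
      // SpriteKit side: a scene whose only child plays the video file.
      let videoNode = SKVideoNode(fileNamed: "video.mov")
      let videoScene = SKScene(size: CGSize(width: 1280, height: 720))
      videoNode.position = CGPoint(x: videoScene.size.width / 2, y: videoScene.size.height / 2)
      videoNode.size = videoScene.size
      videoScene.addChild(videoNode)
      videoNode.play()

      // SceneKit side: a 16:9 plane (dimensions in meters) whose diffuse
      // material renders the SpriteKit scene, i.e. the playing video.
      let plane = SCNPlane(width: 1.6, height: 0.9)
      plane.firstMaterial?.diffuse.contents = videoScene
      return SCNNode(geometry: plane)
  }

This is exactly what we'll do at the end of the tutorial, except that the plane will be the screen of a 3D TV model placed on a real-world surface.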

Minimum Requirements

Since this tutorial relies on ARKit 1.5, you'll need a Mac running Xcode 9.3 or later, and an iPhone or iPad with an A9 processor or newer (iPhone 6S or later) running iOS 11.3 or later.

Step 1: Download the assets you need

To make this tutorial easier to follow, I've created a folder with the required 2D assets and the Swift file needed for the project, so you don't get lost along the way. Download and unzip the compressed folder containing the assets before you begin.

Step 2: Set up the AR project in Xcode

If you're not sure how to set up your AR project in Xcode, follow Step 2 in our post on controlling a 3D plane using hitTest. Give your project a name, for example NextReality_Tutorial9. Before proceeding with the rest of the tutorial, do a quick test run.

Step 3: Importing Assets into Your Project

In the Project Navigator, click on the Assets.xcassets folder; this is where we'll add our 2D images. Then right-click on the left pane of the area to the right of the Project Navigator. Choose "Import" and add the "overlay_grid.png" file from the unzipped "Assets" folder.

Next, right-click on the art.scnassets folder, which is where you keep your 3D SceneKit-format files. Choose the "Add Files to 'art.scnassets'" option. Then add the "tv.dae" file from the unzipped "Assets" folder you downloaded in Step 1.

Next, in the Project Navigator, right-click again on the yellow folder for "NextReality_Tutorial9" (or whatever you named your project). Choose the "Add Files to 'NextReality_Tutorial9'" option.

Then navigate to the unzipped "Assets" folder and select the "Grid.swift" file. Make sure "Copy items if needed" is checked, leaving all other settings unchanged, and click "Add".

This file will help render an image of a grid for every horizontal plane ARKit detects.

Step 4: Use hitTest to Place the 3D TV on a Detected Horizontal Plane

To quickly go through the ARKit plane detection functions, take a look at our tutorial on Horizontal Plane Detection.

Open the "ViewController.swift" class by double clicking on it. If you want to follow the last step 4 code, just open this link to view it on GitHub.

In the ViewController.swift file, find the scene creation line in the viewDidLoad() method:

  let scene = SCNScene(named: "art.scnassets/ship.scn")!

Replace it with the following (to make sure no scene is created with the default ship model):

  let scene = SCNScene()

Next, find this line at the top of the file:

  @IBOutlet var sceneView: ARSCNView!

Add this line below it to create an array of "grids" for all the horizontal planes detected:

  var grids = [Grid]()

Copy and paste the following two methods into the end of the file, before the last curly bracket (}). These methods let us add our grid to any horizontal planes that ARKit detects, as a visual indicator.

  func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
      guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
      let grid = Grid(anchor: planeAnchor)
      self.grids.append(grid)
      node.addChildNode(grid)
  }

  func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
      guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
      let grid = self.grids.filter { grid in
          return grid.anchor.identifier == planeAnchor.identifier
      }.first

      guard let foundGrid = grid else {
          return
      }

      foundGrid.update(anchor: planeAnchor)
  }

Let's look briefly at what happens in these two methods:

  1. didAdd() is called when a new node is added to the ARSCNView. Here, we add the imported grid image to each detected plane.
  2. didUpdate() is called when later ARPlaneAnchor nodes are detected, or when an existing plane is expanded. In that case, we want to update and expand our grid as well, which we do here by calling update() on that specific Grid. (A sketch of the Grid class follows below.)
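For reference, here is a minimal sketch of what a Grid-style class looks like; the actual Grid.swift you downloaded in Step 1 may differ in its details, but the idea is the same: an SCNNode that draws overlay_grid.png across a detected plane and resizes itself as ARKit refines the plane's extent.

  import ARKit
  import SceneKit

  // Minimal sketch of a Grid-style class; the downloaded Grid.swift may
  // differ in detail. It draws overlay_grid.png across a detected plane
  // and resizes itself as ARKit extends that plane.
  class Grid: SCNNode {
      var anchor: ARPlaneAnchor
      private let planeGeometry: SCNPlane
      private let planeNode: SCNNode

      init(anchor: ARPlaneAnchor) {
          self.anchor = anchor
          planeGeometry = SCNPlane(width: CGFloat(anchor.extent.x),
                                   height: CGFloat(anchor.extent.z))
          let material = SCNMaterial()
          material.diffuse.contents = UIImage(named: "overlay_grid.png")
          planeGeometry.materials = [material]
          planeNode = SCNNode(geometry: planeGeometry)
          // SCNPlane stands upright by default; rotate it to lie flat.
          planeNode.eulerAngles.x = -.pi / 2
          planeNode.position = SCNVector3(anchor.center.x, 0, anchor.center.z)
          super.init()
          addChildNode(planeNode)
      }

      required init?(coder aDecoder: NSCoder) {
          fatalError("init(coder:) has not been implemented")
      }

      // Called from renderer(_:didUpdate:for:) when ARKit refines the plane.
      func update(anchor: ARPlaneAnchor) {
          self.anchor = anchor
          planeGeometry.width = CGFloat(anchor.extent.x)
          planeGeometry.height = CGFloat(anchor.extent.z)
          planeNode.position = SCNVector3(anchor.center.x, 0, anchor.center.z)
      }
  }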

Now, let's enable feature points. Under this line in viewDidLoad():

  sceneView.showsStatistics = true 

Add the following:

  sceneView.debugOptions = ARSCNDebugOptions.showFeaturePoints 

Next, let's turn on horizontal plane detection. Under this line in viewWillAppear():

  let configuration = ARWorldTrackingConfiguration()

add the following:

  configuration.planeDetection = .horizontal 

This is very important! It ensures that ARKit can detect horizontal planes in the real world. The feature points let us see all the 3D points ARKit is able to detect.
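For reference, after this change viewWillAppear() should look roughly like this (the Xcode ARKit template's boilerplate plus our one new line):

  override func viewWillAppear(_ animated: Bool) {
      super.viewWillAppear(animated)

      // Create a session configuration that tracks the device's position
      // and orientation in the real world.
      let configuration = ARWorldTrackingConfiguration()

      // Detect horizontal planes (floors, tables) so the hit test in the
      // next step has real-world surfaces to place the TV against.
      configuration.planeDetection = .horizontal

      // Run the view's session.
      sceneView.session.run(configuration)
  }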

Now launch your app on your phone and walk around. Focus on a well-lit horizontal surface such as the floor or a table. You should be able to see blue grids when a horizontal plane is detected:

Next, let's add a gesture recognizer so that a tap can place the TV. Add this to the end of viewDidLoad():

  let gestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(tapped))
  sceneView.addGestureRecognizer(gestureRecognizer)

Now, add the tapped() method, which converts the 2D coordinate of the tapped position on our phone's screen into a 3D coordinate using a hit test.

Add this to the end of the file, but before the last bracket:

  @objc func tapped(gesture: UITapGestureRecognizer) {
      // Retrieve the 2D position of the touch event on the screen
      let touchPosition = gesture.location(in: sceneView)

      // Translate those 2D points to 3D points using hitTest (existing plane)
      let hitTestResults = sceneView.hitTest(touchPosition, types: .existingPlaneUsingExtent)

      guard let hitTest = hitTestResults.first else {
          return
      }
      addTV(hitTest)
  }

Finally, add the addTV() method at the end of the file, but before the last bracket:

  func addTV(_ hitTestResult: ARHitTestResult) {
      let scene = SCNScene(named: "art.scnassets/tv.scn")!
      let tvNode = scene.rootNode.childNode(withName: "tv_node", recursively: true)
      tvNode?.position = SCNVector3(hitTestResult.worldTransform.columns.3.x, hitTestResult.worldTransform.columns.3.y, hitTestResult.worldTransform.columns.3.z)
      self.sceneView.scene.rootNode.addChildNode(tvNode!)
  }

This method makes sure we add our 3D TV at the 3D coordinate calculated by the hitTest. Launch the app and tap on a detected horizontal plane. You should now see a TV appear wherever you tap, like this:

Checkpoint: Your entire project at the end of this step should look like the final Step 4 code on my GitHub.

Step 5: Play a video on our 3D TV!

What's cooler than watching a video on a phone? Watching a video in augmented reality on our phones! If you remember, in our last tutorial we placed the Mona Lisa on a wall. We'll use the same video from that tutorial and play it on our 3D TV.

Let's import the video into our project. In the Project Navigator, right-click on the yellow folder for "NextReality_Tutorial9" (or whatever you named your project). Choose the "Add Files to 'NextReality_Tutorial9'" option. Select the "video.mov" file (you should see something like this):

Next, let's go back to our addTV() method.

Directly above this line:

  self.sceneView.scene.rootNode.addChildNode(tvNode!)

Add this new code:

  let tvScreenPlaneNode = tvNode?.childNode(withName: "screen", recursively: true)
  let tvScreenPlaneNodeGeometry = tvScreenPlaneNode?.geometry as! SCNPlane

  let tvVideoNode = SKVideoNode(fileNamed: "video.mov")
  let videoScene = SKScene(size: .init(width: tvScreenPlaneNodeGeometry.width * 1000, height: tvScreenPlaneNodeGeometry.height * 1000))
  videoScene.addChild(tvVideoNode)

  tvVideoNode.position = CGPoint(x: videoScene.size.width / 2, y: videoScene.size.height / 2)
  tvVideoNode.size = videoScene.size

  let tvScreenMaterial = tvScreenPlaneNodeGeometry.materials.first(where: { $0.name == "video" })
  tvScreenMaterial?.diffuse.contents = videoScene

  tvVideoNode.play()

Here, we import our video into an SKVideoNode and add it to a SpriteKit scene (SKScene). We then size that scene to match the TV's screen and attach it to the "screen" material of our existing TV SCNNode. This ensures that the video scene is pinned to our TV. Then, we play the video.
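One optional tweak, not part of the original tutorial: an SKVideoNode created from a file name plays the clip once and then stops. If you'd rather have the TV loop forever, you can back the node with an AVPlayer and restart it when it finishes, as in this sketch:

  import AVFoundation
  import SpriteKit

  // Optional: build a looping SKVideoNode backed by an AVPlayer, so the
  // clip restarts whenever it reaches the end. Uses the same "video.mov"
  // file imported earlier in this step.
  func makeLoopingVideoNode() -> SKVideoNode? {
      guard let url = Bundle.main.url(forResource: "video", withExtension: "mov") else {
          return nil
      }
      let player = AVPlayer(url: url)
      NotificationCenter.default.addObserver(
          forName: .AVPlayerItemDidPlayToEndTime,
          object: player.currentItem,
          queue: .main
      ) { _ in
          // Rewind and replay when the clip finishes.
          player.seek(to: .zero)
          player.play()
      }
      return SKVideoNode(avPlayer: player)
  }

You could swap this node in for the SKVideoNode(fileNamed:) line above; the tvVideoNode.play() call still starts the initial playback.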

Run the app again, and after placing the TV, the video should play and look something like this:

Checkpoint: Your entire project at the end of this step should look like the final code for Step 5 on my GitHub.

What we have achieved

Success! With the above steps, we were able to place a 3D TV in augmented reality and play a video on it with ARKit. Imagine the future implications of this kind of AR experience: once AR headsets go mainstream, we'll finally be able to watch TV on big screens anywhere. That's already possible with devices like the HoloLens and the Magic Leap One, and now we've done it with ARKit right on our phones. Try taking things to the next level by playing your own videos on the 3D TV.

If you need the full code for this project, you can find it in my GitHub repo. I hope you enjoyed this ARKit tutorial. If you have any comments or feedback, feel free to leave them in the comments section. Happy coding!

Don't miss: How to place 2D images, like a painting or photo, on a wall in augmented reality

Cover image and screenshots by Ambuj Punn/Next Reality
