International Journal of Scientific & Technology Research
IJSTR, Volume 9, Issue 1, January 2020 Edition




Website: http://www.ijstr.org

ISSN 2277-8616



Real World And Virtual Object Obstacle In Augmented Reality Using Scene Perception


 

AUTHOR(S)

Aninditya Anggari Nuryono, Alfian Ma’arif, Siti Fatimah Anggrahini

 

KEYWORDS

Augmented Reality, Autonomous Agent, Scene Perception, Markerless, Pathfinding, Mesh, Intel RealSense

 

ABSTRACT

Augmented Reality is a technique for combining digital content with the real world in real time. In markerless Augmented Reality, an Intel RealSense 3D camera is used to produce the digital content: the camera reconstructs the real environment in three dimensions using a method called scene perception. In this work, the camera supports an autonomous agent in Augmented Reality. The agent navigates to a destination point by searching for a path, a process called pathfinding, and exhibits three behaviors: seek, arrive, and action selection. These behaviors allow the agent to reach its destination while avoiding both virtual obstacles and real obstacles in the physical environment. The scene perception method is used to build a mesh, a virtual grid overlaid on the real world that serves as the Augmented Reality area. The results show that navigation of the autonomous agent using the scene perception method in Augmented Reality works properly: the agent reaches its destination point while avoiding virtual and real obstacles.
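The seek and arrive behaviors named above follow Reynolds' classic steering-behavior formulation. As an illustrative sketch only (not the authors' implementation), the two behaviors can be written as 2D velocity functions; the Python language, vector representation, and parameter names (`max_speed`, `slow_radius`) are assumptions for illustration:

```python
import math

def seek(pos, target, max_speed):
    """Seek: steer straight toward the target at full speed."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)  # already at the target
    # Normalize the direction and scale to maximum speed.
    return (dx / dist * max_speed, dy / dist * max_speed)

def arrive(pos, target, max_speed, slow_radius):
    """Arrive: like seek, but decelerate inside slow_radius to stop smoothly."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)
    # Full speed far away; speed ramps down linearly inside the slowing zone.
    speed = max_speed if dist > slow_radius else max_speed * dist / slow_radius
    return (dx / dist * speed, dy / dist * speed)
```

In a full agent, an action-selection layer would choose between such behaviors each frame, and the resulting velocity would be constrained to the walkable mesh produced by scene perception.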

 
