Name: Shaojun HU
Title: Associate Professor
Office: Room 317, College of Information Engineering
Email: hsj AT nwsuaf DOT edu DOT cn
ResearchGate Link: https://www.researchgate.net/profile/Shaojun_Hu
Personal Information
Shaojun Hu is an Associate Professor at Northwest A&F University, China. His research interests include Computer Graphics and Human-Computer Interaction. He received his Ph.D. degree from the Graduate School of Information Science and Technology at Iwate University, Japan, in September 2009, where he was supervised by Prof. Norishige Chiba and Prof. Tadahiro Fujimoto. He worked with Prof. Michael Wimmer at the Institute of Computer Graphics and Algorithms, TU Wien, Austria, from October 2014 to February 2015, and then with Prof. Takeo Igarashi at the User Interface Research Group, the University of Tokyo, as a visiting researcher from March 2016 to March 2017. In 2010, he joined the Department of Computer Science at Northwest A&F University as a Lecturer and became an Associate Professor in 2014.
He has served as a council member of the Shaanxi Society of Image and Graphics, a program committee member of Nicograph International 2018/2019/2023, IFIP-ICEC2020, and ISID2021/2022, the local organizing committee chair of ICVR2023, and a reviewer for "IEEE TVCG", "IEEE GRSL", "Computer Graphics Forum", "The Visual Computer", "Computers & Graphics", "Computer Animation & Virtual Worlds" and "Computers & Electronics in Agriculture". He is also a member of ACM and the China Computer Federation (CCF). He teaches several courses for undergraduate and graduate students, including "C Programming Language", "Object-Oriented Programming Using C++", "Data Structures and Algorithms", "Computer Graphics", "Virtual Reality Technology and Applications" and "Advanced 3D Modeling".
Shaojun Hu runs a Computer Graphics lab with Prof. Zhiyi Zhang and Prof. Nan Geng at Northwest A&F University. He has supervised 17 Master's students and is currently supervising 13 more. He has been responsible for several projects, including a task of the National 863 Plan [2013AA10230402], the Natural Science Foundation of China [61303124], the Natural Science Basic Research Plan of Shaanxi [2015JQ6250, 2019JM-370], and the Fundamental Research Funds for the Central Universities [2452017343].
Research Directions
Computer graphics, computer animation, natural phenomena
Master Students (2019)
Important Dates
SPM 2020 (Strasbourg, France) (https://spm2020.sciencesconf.org/)
Abstract for full papers: January 15, 2020
Full paper submission: January 20, 2020
First review notification: February 28, 2020
Revised papers due: March 21, 2020
Final notification: April 10, 2020
Camera ready papers: April 24, 2020
Conference: June 2-4, 2020
CGI 2020 (Geneva, Switzerland)(http://www.cgs-network.org/cgi20/)
Submission Deadline: February 15, 2020
Notification of Acceptance: March 24, 2020
Camera-Ready Journal Papers: April 07, 2020
Conference dates: June 22-25, 2020
CASA2020 (Bournemouth, UK)(http://casa2020.bournemouth.ac.uk/)
Submission: March 16, 2020
Notification of acceptance: April 20, 2020
Camera ready: May 4, 2020
Conference dates: Jul 1-3, 2020
ACM SIGGRAPH 2020(Washington DC, USA)(https://s2020.siggraph.org/submissions/)
Technical papers (stage 1): 22 January, 2020
Technical papers (stage 2): 23 January, 2020
Technical papers (stage 3): 24 January, 2020
Conference dates: 19-23 July, 2020
ACM SIGGRAPH Asia 2020 (Daegu, South Korea)(https://sa2019.siggraph.org/about-us/sa2020)
Submissions Form Deadline: 19 May 2020 ?
Paper Deadline: 20 May 2020 ?
Upload Deadline: 21 May 2020 ?
Conference: 17 - 20 November 2020 ?
Exhibition: 18 - 20 November 2020 ?
ACM SIGCHI 2021 (Yokohama, Japan)(http://chi2021.acm.org/)
Paper deadline: Sept 10, 2020
Conference dates: May 8 – 13, 2021
ACM UIST 2020 (Minneapolis, USA)(http://uist.acm.org/uist2020/)
Paper Deadline: April 1, 2020, 5PM PDT
Conference dates: October 20-23, 2020
Eurographics 2021 (Vienna, Austria) (https://conferences.eg.org/eg2021/)
Abstract: September 26, 2020 ?
Submission: October 3, 2020 ?
Reviews available: November 21, 2020 ?
Rebuttal: November 28, 2020 ?
Notification to authors: December 12, 2020 ?
Conference date: 3rd to 7th May, 2021
Pacific Graphics 2020 (Wellington, New Zealand)(https://ecs.wgtn.ac.nz/Events/PG2020/)
Abstract submission: 5 June, 2020?
Regular paper submission: 7 June, 2020?
Reviews to authors: 12 July, 2020?
Conference dates: October 26-29, 2020
I3D 2020 (San Francisco, CA, USA)(https://i3dsymposium.github.io/2020/cfp.html)
Paper submission deadline: 13 December 2019
Extension for re-submissions: 20 December 2019
Notification of committee decisions: 10 February 2020
Conference dates: 5-7 May 2020
IEEE VR 2020 (Atlanta, Georgia, USA)(http://www.ieeevr.org/2020/)
Abstracts due: September 3, 2019
Submissions due: September 10, 2019
Final notifications: January 22, 2020
Conference dates: March 22nd - 26th, 2020
SCA 2020 (Montreal, QC, Canada)(http://computeranimation.org/)
Title and abstract submission: May 4, 2020
Paper submission: May 7, 2020
Paper notification: June 24, 2020
Conference dates: 24-26 August, 2020
IEEE ICRA 2021 (Xi'an China) (http://2021.ieee-icra.org/)
Submission of all contributions: Sept. 15, 2020
Notification of acceptance: Jan. 15, 2021
Conference dates: May 16-22, 2021
VRST 2020 (Ottawa, Canada) (https://vrst.acm.org/vrst2020/index.html)
Submission due: ?
Conference dates: 1-4 November, 2020
Selected Publications
A Semi-Automatic Oriental Ink Painting Framework for Robotic Drawing from 3D Models
Hao Jin (a), Minghui Lian (a), Shicheng Qiu (a), Xuxu Han (a), Xizhi Zhao (a), Long Yang (a), Zhiyi Zhang (a), Haoran Xie (b), Kouichi Konno (c), Shaojun Hu (a)*
a. College of Information Engineering, Northwest A&F University, China
b. JAIST, Ishikawa, Japan
c. Faculty of Science and Engineering, Iwate University, Morioka, Japan
Abstract
Creating visually pleasing stylized ink paintings from 3D models is a challenge in robotic manipulation. We propose a semi-automatic framework that can extract expressive strokes from 3D models and draw them in oriental ink painting styles by using a robotic arm. The framework consists of a simulation stage and a robotic drawing stage. In the simulation stage, geometrical contours are automatically extracted from a given viewpoint and a neural network is employed to create simplified contours. Then, expressive digital strokes are generated after interactive editing according to the user's aesthetic understanding. In the robotic drawing stage, an optimization method is presented to draw smooth strokes that are physically consistent with the digital strokes, and two oriental ink painting styles, termed Noutan (shade) and Kasure (scratchiness), are applied to the strokes by robotic control of the brush's translation, dipping and scraping. Unlike existing methods that concentrate on generating paintings from 2D images, our framework has the advantage of rendering stylized ink paintings from 3D models by using a consumer-grade robotic arm. We evaluate the proposed framework on three standard models and a user-defined model. The results show that our framework is able to draw visually pleasing oriental ink paintings with expressive strokes.
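At its core, the drawing stage converts noisy digital strokes into smooth trajectories that a robot arm can follow. The snippet below is a minimal illustrative sketch of that idea only, not the paper's optimization method: it fits a smoothing spline to a stroke polyline and resamples it into way-points; the smoothing factor, sample count and the synthetic stroke are arbitrary assumptions.

```python
# Minimal sketch: smooth a noisy 2D digital stroke into an evenly sampled
# trajectory before handing it to a drawing device. Illustrative only.
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_stroke(points, n_samples=200, smoothing=0.5):
    """points: (N, 2) stroke vertices in drawing-plane coordinates."""
    x, y = points[:, 0], points[:, 1]
    tck, _ = splprep([x, y], s=smoothing)       # smoothing B-spline through the polyline
    u_new = np.linspace(0.0, 1.0, n_samples)    # uniform samples in spline parameter
    sx, sy = splev(u_new, tck)
    return np.column_stack([sx, sy])

if __name__ == "__main__":
    noisy = np.cumsum(np.random.randn(50, 2) * 0.1, axis=0)  # stand-in for a digital stroke
    traj = smooth_stroke(noisy)
    print(traj.shape)  # (200, 2) way-points
```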
Hao Jin, Minghui Lian, Shicheng Qiu, Xuxu Han, Xizhi Zhao, Long Yang, Zhiyi Zhang, Haoran Xie, Kouichi Konno, Shaojun Hu*. A Semi-automatic Oriental Ink Painting Framework for Robotic Drawing from 3D Models. IEEE Robotics and Automation Letters, 8(10): 6667-6674, 2023. doi: 10.1109/LRA.2023.3311364. [Preprint][Link1][Link2][Video]
A Convex Hull-Based Feature Descriptor for Learning Tree Species Classification From ALS Point Clouds
Yanxing Lv (a), Yida Zhang (a), Suying Dong (a), Long Yang (a), Zhiyi Zhang (a), Zhengrong Li (b), Shaojun Hu (a)*
a. College of Information Engineering, Northwest A&F University, China
b. Beijing New3S Technology Co. Ltd., Beijing, China
Abstract
Classifying tree species from point clouds acquired by light detection and ranging (LiDAR) scanning systems is important in many applications, including remote sensing, virtual reality, and forestry inventory. Compared with terrestrial laser scanning systems, airborne laser scanning (ALS) systems can acquire large-scale tree point clouds from only a single scan. However, ALS point clouds have the disadvantages of low density, uneven distribution, and unclear branch structure, making the classification of tree species from ALS point clouds a challenging task. Recently, deep learning-based classification approaches, such as PointNet++, which can operate directly on 3-D point sets, have been intensively studied in scene classification. However, the classification precision of learning-based approaches for point clouds relies on point coordinates and features, such as normals. Unlike the face normals of regular objects, trees have complex branch structures and detailed leaves, which are difficult to capture using ALS systems. Hence, it might be inappropriate to use the normals of ALS tree points for classification. In this letter, we propose a novel convex hull-based feature descriptor for tree species classification using the deep learning network PointNet++. To evaluate the effectiveness of our approach, three additional feature descriptors (normal descriptor, alpha shape-based descriptor, and covariance descriptor) are also investigated with PointNet++. The results show that the convex hull-based feature descriptor can achieve 86.6% overall accuracy in tree species classification, which is notably higher than the other three descriptors.
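The key idea is to attach convex hull-derived features to each point before feeding the cloud to PointNet++. The sketch below shows one plausible way to compute such per-point hull statistics (local hull volume and surface area over a k-nearest-neighbour patch); the exact descriptor used in the letter may differ, and the neighbourhood size k and the chosen statistics are assumptions.

```python
# Illustrative sketch: per-point convex hull features over local kNN patches.
# Not the paper's exact descriptor.
import numpy as np
from scipy.spatial import ConvexHull, cKDTree

def convex_hull_features(points, k=32):
    """points: (N, 3) ALS tree points. Returns (N, 2) array [hull volume, hull area]."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)           # k nearest neighbours per point
    feats = np.zeros((len(points), 2))
    for i, nbr in enumerate(idx):
        patch = points[nbr]
        try:
            hull = ConvexHull(patch)
            feats[i] = hull.volume, hull.area  # simple hull statistics of the patch
        except Exception:
            feats[i] = 0.0, 0.0                # degenerate (e.g. coplanar) neighbourhood
    return feats

if __name__ == "__main__":
    pts = np.random.rand(1000, 3)              # stand-in for an ALS tree crown
    f = convex_hull_features(pts)
    print(f.shape)                             # (1000, 2); concatenated with xyz for PointNet++
```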
Yanxing Lv, Yida Zhang, Suying Dong, Long Yang, Zhiyi Zhang, Zhengrong Li, Shaojun Hu*. A Convex Hull-Based Feature Descriptor for Learning Tree Species Classification from ALS Point Clouds. IEEE Geoscience and Remote Sensing Letters, 2021. doi: 10.1109/LGRS.2021.3055773. [Preprint][Link]
Realistic Modeling of Tree Ramifications from an Optimal Manifold Control Mesh
Zhengyu Huang (a), Zhiyi Zhang (a), Nan Geng (a), Long Yang (a), Dongjian He (b), Shaojun Hu (a)*
a. College of Information Engineering, Northwest A&F University, China
b. College of Mechanical and Electronic Engineering, Northwest A&F University, China
Abstract
Modeling realistic branches and ramifications of trees is a challenging task because of their complex geometric structures. Many approaches have been proposed to generate plausible tree models from images, sketches, point clouds, and botanical rules. However, most approaches focus on a global impression of trees, such as the topological structure of branches and the arrangement of leaves, without taking the continuity of branch ramifications into consideration. To model a complete tree quadrilateral mesh (quad-mesh) with smooth ramifications, we propose an optimization method to calculate a suitable control mesh for Catmull-Clark subdivision. Given a tree's skeleton information, we build a local coordinate system for each joint node and orient each node appropriately based on the angle between a parent branch and its child branch. Then, we create the corresponding basic ramification units using a cuboid-like quad-mesh, which is mapped back to the world coordinate system. To obtain a suitable manifold initial control mesh as a main mesh, the ramifications are classified into main and additional ramifications, and a bottom-up optimization approach is applied to adjust the positions of the main ramification units when they connect to their neighbours. Next, the first round of Catmull-Clark subdivision is applied to the main ramifications. The additional ramifications, which were excluded in the preceding step to alleviate visual distortion, are added back to the main mesh using a cut-paste operation. Finally, a second round of Catmull-Clark subdivision is used to generate the final quad-mesh of the entire tree. The results demonstrate that our method effectively generates realistic tree quad-meshes from different tree skeletons.
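The control mesh is assembled from cuboid-like quad units placed in a local coordinate system at each joint node. The sketch below illustrates just that building block, assuming a simple helper-vector construction of the local frame; it is not the paper's optimization procedure, and the function names and radius parameter are placeholders.

```python
# Minimal sketch: build a local orthonormal frame at a skeleton joint and place
# a square ring of four control points around the branch axis, the kind of
# cuboid-like quad unit a Catmull-Clark control mesh can be assembled from.
import numpy as np

def local_frame(direction):
    """Return (t, n, b): branch tangent plus two perpendicular axes."""
    t = direction / np.linalg.norm(direction)
    helper = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(t, helper)) > 0.99:          # nearly vertical branch: pick another helper
        helper = np.array([1.0, 0.0, 0.0])
    n = np.cross(t, helper)
    n /= np.linalg.norm(n)
    b = np.cross(t, n)
    return t, n, b

def quad_ring(joint, direction, radius):
    """Four control points of a square cross-section centred at a joint node."""
    _, n, b = local_frame(direction)
    corners = [n + b, n - b, -n - b, -n + b]
    return [joint + radius * c for c in corners]

if __name__ == "__main__":
    for p in quad_ring(np.array([0.0, 0.0, 1.0]), np.array([0.1, 0.0, 1.0]), 0.05):
        print(np.round(p, 3))
```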
Zhengyu Huang, Zhiyi Zhang, Nan Geng, Long Yang, Dongjian He, Shaojun Hu*. Realistic Modeling of Tree Ramifications from an Optimal Manifold Control Mesh. ICIG2019, Lecture Notes in Computer Science, vol 11902: 316-332, 2019. Springer, Cham. (ICIG2019 oral paper, acceptance rate = 8.6%). [Preprint][Link][Slides]
Intelligent Chinese Calligraphy Beautification from Handwritten Characters for Robotic Writing
Xinyue Zhang (a), Yuanhao Li (a), Zhiyi Zhang (a), Kouichi Konno (b), Shaojun Hu (a)*
a. College of Information Engineering, Northwest A&F University, China
b. Faculty of Engineering, Iwate University, Japan
Abstract
Chinese calligraphy is the artistic expression of character writing and is highly valued in East Asia. However, it is a challenge for non-expert users to write visually pleasing calligraphy in their own unique style. In this paper, we develop an intelligent system that beautifies handwritten Chinese characters and physically writes them in a certain calligraphy style using a robotic arm. First, the user sketches the handwritten characters using a mouse or a touch pad. Then, we employ a convolutional neural network to identify each stroke from the skeletons, and the corresponding standard stroke is retrieved from a pre-built calligraphy stroke library for robotic arm writing. To output aesthetically pleasing calligraphy in the user's style, we propose a global optimization approach that solves a minimization problem between the handwritten strokes and the standard calligraphy strokes, in which a shape character vector is introduced to describe the shape of the standard strokes. Unlike existing systems that focus on the generation of digital calligraphy from handwritten characters, our system has the advantage of converting the user's handwriting into physical calligraphy written by a robotic arm. We take the regular script (Kai) style as an example and perform a user study to evaluate the effectiveness of the system. The writing results show that our system can produce visually pleasing calligraphy from various input handwriting while retaining the user's style.
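The retrieval step matches each handwritten stroke against a stroke library by minimizing a shape distance. The sketch below conveys only that retrieval idea with a very simple descriptor (arc-length-resampled, translation- and scale-normalized skeleton points); the real system uses a CNN classifier and a global optimization with a shape character vector, and the sample count is an assumption.

```python
# Illustrative sketch of stroke retrieval by shape-descriptor distance.
import numpy as np

def shape_descriptor(stroke, n=32):
    """stroke: (N, 2) polyline skeleton -> (n*2,) normalized descriptor."""
    seg = np.linalg.norm(np.diff(stroke, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])             # arc length along the stroke
    t = np.linspace(0.0, s[-1], n)
    pts = np.column_stack([np.interp(t, s, stroke[:, 0]),
                           np.interp(t, s, stroke[:, 1])])
    pts -= pts.mean(axis=0)                                  # remove translation
    scale = np.linalg.norm(pts, axis=1).max()
    return (pts / max(scale, 1e-8)).ravel()                  # remove scale

def best_match(handwritten, library):
    """library: dict of name -> (N, 2) standard stroke skeleton."""
    d = shape_descriptor(handwritten)
    return min(library, key=lambda k: np.linalg.norm(d - shape_descriptor(library[k])))
```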
Xinyue Zhang, Yuanhao Li, Zhiyi Zhang, Kouichi Konno, Shaojun Hu*. Intelligent Chinese Calligraphy Beautification from Handwritten Characters for Robotic Writing. The Visual Computer, 35(6–8): 1193–1205, 2019. doi: 10.1007/s00371-019-01675-w. (Accepted by CGI2019, acceptance rate = 21.6%). [Preprint][Link][Video][Slides]
Efficient Tree Modeling from Airborne LiDAR Point Clouds
Shaojun Hu (a)*, Zhengrong Li (b), Zhiyi Zhang (a), Dongjian He (c), Michael Wimmer (d)
a. College of Information Engineering, Northwest A&F University, China
b. Beijing New3S Technology Co. Ltd., China
c. College of Mechanical and Electronic Engineering, Northwest A&F University, China
d. Institute of Computer Graphics and Algorithms, Vienna University of Technology, Austria
Abstract
Modeling real-world trees is important in many application areas, including computer graphics, botany and forestry. An example of a modeling method is reconstruction from light detection and ranging (LiDAR) scans. In contrast to terrestrial LiDAR systems, airborne LiDAR systems – even current high-resolution systems – capture only very few samples on tree branches, which makes the reconstruction of trees from airborne LiDAR a challenging task. In this paper, we present a new method to model plausible trees with fine details from airborne LiDAR point clouds. To reconstruct tree models, first, we use a normalized cut method to segment an individual tree point cloud. Then, trunk points are added to supplement the incomplete point cloud, and a connected graph is constructed by searching sufficient nearest neighbors for each point. Based on the observation of real-world trees, a direction field is created to restrict branch directions. Then, branch skeletons are constructed using a bottom-up greedy algorithm with a priority queue, and leaves are arranged according to phyllotaxis. We demonstrate our method on a variety of examples and show that it can generate a plausible tree model in less than one second, in addition to preserving features of the original point cloud.
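The reconstruction works on a nearest-neighbour graph over the points, from which branch skeletons are grown with a priority-queue-based greedy search. The sketch below shows only a common graph-based baseline for that stage (kNN graph plus root-to-point shortest-path distances, which can then be binned into skeleton nodes); it is not the paper's full pipeline, with no normalized-cut segmentation, direction field or leaf placement, and the root index and k are assumptions.

```python
# Sketch: kNN graph over LiDAR points and graph distances from the trunk root.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.spatial import cKDTree

def graph_distances(points, root_index=0, k=8):
    """points: (N, 3) tree points; returns shortest-path distance of each point to the root."""
    tree = cKDTree(points)
    dist, idx = tree.query(points, k=k + 1)        # first neighbour is the point itself
    rows = np.repeat(np.arange(len(points)), k)
    cols = idx[:, 1:].ravel()
    vals = dist[:, 1:].ravel()
    graph = coo_matrix((vals, (rows, cols)), shape=(len(points),) * 2)
    return dijkstra(graph, directed=False, indices=root_index)

if __name__ == "__main__":
    pts = np.random.rand(500, 3)                    # stand-in for a segmented tree point cloud
    d = graph_distances(pts)
    print(d.min(), d.max())                         # bin these distances to form skeleton nodes
```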
Shaojun Hu*, Zhengrong Li, Zhiyi Zhang, Dongjian He, Michael Wimmer. Efficient Tree Modeling from Airborne LiDAR Point Clouds. Computers & Graphics, 2017. doi: 10.1016/j.cag.2017.04.004. (Invited talk at SMI2018). [Link][Video][Binary][Appendix][Slides]
Acknowledgments
We would like to kindly thank Prof. Takeo Igarashi and the anonymous reviewers. This work was supported by the National 863 Plan [2013AA10230402], NSFC [61303124], NSBR Plan of Shaanxi [2015JQ6250], and the Eurasia-Pacific Uninet Post-Doc Scholarship from OeAD.
Data-driven Modeling and Animation of Outdoor Trees Through Interactive Approach
Shaojun Hu (a)*, Zhiyi Zhang (a), Haoran Xie (b), Takeo Igarashi (b)
a. College of Information Engineering, Northwest A&F University, China
b. Graduate School of Information Science and Technology, The University of Tokyo, Japan
Abstract
Computer animation of trees has widespread applications in the fields of film production, video games and virtual reality. Physics-based methods are feasible solutions to achieve good approximations of tree movements. However, realistically animating a specific tree in the real world remains a challenge since physics-based methods rely on dynamic properties that are difficult to measure. In this paper, we present a low-cost interactive approach to model and animate outdoor trees from photographs and videos, which can be captured using a smartphone or handheld camera. An interactive editing approach is proposed to reconstruct detailed branches from photographs by considering an epipolar constraint. To track the motions of branches and leaves, a semi-automatic tracking method is presented to allow the user to interactively correct mis-tracked features. Then, the physical parameters of branches and leaves are estimated using a fast Fourier transform, and these properties are applied to a simplified physics-based model to generate animations of trees with various external forces. We compare the animation results with reference videos on several examples and demonstrate that our approach can achieve realistic tree animation.
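The parameter-estimation step boils down to finding the dominant oscillation frequency in a tracked branch trajectory. The sketch below illustrates that FFT-based estimation on a synthetic damped-sway signal; the frame rate, decay rate and noise level are assumptions, and it is only a stand-in for the estimation performed on video-tracked features.

```python
# Minimal sketch: estimate a branch's natural frequency from a tracked displacement signal.
import numpy as np

def natural_frequency(displacement, fps):
    """displacement: 1D tracked branch offset; fps: capture frame rate."""
    x = displacement - displacement.mean()          # remove the static offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]       # dominant non-DC peak

if __name__ == "__main__":
    fps, f0 = 30.0, 2.4                              # 30 fps video, 2.4 Hz branch sway (assumed)
    t = np.arange(0, 10, 1.0 / fps)
    sway = np.exp(-0.3 * t) * np.cos(2 * np.pi * f0 * t) + 0.02 * np.random.randn(len(t))
    print(round(natural_frequency(sway, fps), 2))    # ~2.4
```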
Shaojun Hu*, Zhiyi Zhang, Haoran Xie, Takeo Igarashi. Data-driven modeling and animation of outdoor trees through interactive approach. The Visual Computer, 2017. doi:10.1007/s00371-017-1377-6. (Accepted by CGI2017, acceptance rate = 20%) [Link][Preprint][Video][Binary][Appendix][Slides]
Acknowledgments
We thank Hironori Yoshida, Seung-tak Noh and the anonymous reviewers. This work was supported by NSFC [61303124], National 863 Plan [2013AA10230402] and NSBR Plan of Shaanxi [2015JQ6250].
Motion Capture and Estimation of Dynamic Properties for Realistic Tree Animation
Shaojun Hu (a), Peng He (a), Dongjian He (b)*
a. College of Information Engineering, Northwest A&F University, China
b. College of Mechanical and Electronic Engineering, Northwest A&F University, China
Abstract
The realistic animation of real-world trees is a challenging task because natural trees have varied morphologies and internal dynamic properties. In this paper, we present an approach to model and animate a specific tree by capturing the motion of its branches. We chose a Kinect V2 to record both RGB and depth streams of the motion of branches with markers. To obtain the three-dimensional (3D) trajectories of the branches, we used the mean-shift algorithm to track the markers in color images generated by projecting a textured point cloud onto the image plane, and then inversely mapped the tracking results in the image to 3D coordinates. Next, we performed a fast Fourier transform on the tracked 3D positions to estimate the dynamic properties (i.e., the natural frequencies) of the branches. We constructed static tree models using a space colonization algorithm. Given the dynamic properties and the static tree models, we demonstrate that our approach can produce realistic animation of trees in wind fields.
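For the tracking step, the sketch below shows the standard OpenCV mean-shift pattern on a colour video, similar in spirit to the marker tracking described above (the actual pipeline tracks markers in images rendered from a textured Kinect point cloud, and mis-tracked frames are corrected interactively). The video file name, the initial window and the hue histogram setup are placeholders.

```python
# Sketch: colour-histogram mean-shift tracking of one marker window.
import cv2

cap = cv2.VideoCapture("branch_motion.mp4")            # placeholder video path
ok, frame = cap.read()
x, y, w, h = 300, 200, 40, 40                           # assumed initial marker window

roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

track = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, (x, y, w, h) = cv2.meanShift(back, (x, y, w, h), criteria)
    track.append((x + w // 2, y + h // 2))              # marker centre per frame
cap.release()
print(len(track), "tracked positions")
```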
Shaojun Hu, Peng He, Dongjian He*. Motion Capture and Estimation of Dynamic Properties for Realistic Tree Animation. AniNex2017, Bournemouth, U.K., 2017. [Preprint][Link]
Relative Effects of Segregation and Recombination on the Evolution of Sex in Finite Diploid Populations
Xiaoqian Jiang (a,b,e), Shaojun Hu (c,e), Qi Xu (d), Yujun Chang (a,b), Shiheng Tao (a,b)
a. College of Life Science, Northwest A&F University, China
b. Bioinformatics Center, Northwest A&F University, China
c. College of Information Engineering, Northwest A&F University, China
d. College of Animal Science and Technology, Yangzhou University, China
e. These authors contributed equally to this work
Abstract
The mechanism of reproducing more viable offspring in response to selection is a major factor influencing the advantages of sex. In diploids, sexual reproduction combines genotypes by recombination and segregation. Theoretical studies of sexual reproduction have investigated the advantage of recombination in haploids. However, the potential advantage of segregation in diploids is less studied. This study aimed to quantify the relative contributions of recombination and segregation to the evolution of sex in finite diploids by using multilocus simulations. The mean fitness of a sexually or asexually reproducing population was calculated to describe the long-term effects of sex. The evolutionary fate of a sex or recombination modifier was also monitored to investigate the short-term effects of sex. Two different mutation scenarios were considered: (1) only deleterious mutations were present, and (2) a combination of deleterious and beneficial mutations. The results showed that the combined effects of segregation and recombination strongly contributed to the evolution of sex in diploids. When only deleterious mutations were present, segregation efficiently slowed down the speed of Muller's ratchet. As the recombination level increased, the accumulation of deleterious mutations was totally inhibited and recombination contributed substantially to the evolution of sex. The presence of beneficial mutations evidently increased the fixation rate of a recombination modifier. We also observed that the twofold cost of sex was easily overcome in diploids if a sex modifier caused a moderate frequency of sex.
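To convey the simulation setting in miniature, the sketch below is a toy Wright-Fisher model of Muller's ratchet in a purely asexual population: individuals carry counts of deleterious mutations, fitness is multiplicative, and each generation adds Poisson-distributed new mutations. All parameters are arbitrary, and this is far simpler than the multilocus diploid simulations in the paper.

```python
# Toy illustration of Muller's ratchet (asexual Wright-Fisher model).
import numpy as np

def mullers_ratchet(N=1000, U=0.5, s=0.02, generations=500, seed=0):
    rng = np.random.default_rng(seed)
    k = np.zeros(N, dtype=int)                    # deleterious mutations per individual
    least_loaded = []
    for _ in range(generations):
        w = (1.0 - s) ** k                        # multiplicative fitness
        parents = rng.choice(N, size=N, p=w / w.sum())
        k = k[parents] + rng.poisson(U, size=N)   # inherit + mutate, no segregation/recombination
        least_loaded.append(k.min())              # the ratchet "clicks" when this increases
    return least_loaded

if __name__ == "__main__":
    clicks = mullers_ratchet()
    print("least-loaded class after 500 generations:", clicks[-1])
```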
Xiaoqian Jiang, Shaojun Hu, Qi Xu, Yujun Chang, Shiheng Tao. Relative Effects of Segregation and Recombination on the Evolution of Sex in Finite Diploid Populations. Heredity, 111: 505-512, 2013. doi: 10.1038/hdy.2013.72. [Link][Binary]
Acknowledgments
Special thanks to Baolin Mu for his help in improving the speed of our computer program. We are grateful to the members at the Bioinformatics Center of Northwest A&F University for their generosity in providing their computer clusters to run our simulations. We also thank three anonymous reviewers for their constructive comments.
Realistic Animation of Interactive Trees
Shaojun Hu (a)*, Norishige Chiba (b), Dongjian He (c)
a. College of Information Engineering, Northwest A&F University, China
b. Dept. of Computer and Information Sciences, Iwate University, Japan
c. Mechanical and Electronic Engineering, Northwest A&F University, China
Abstract
We present a mathematical model for animating trees realistically by taking into account the influence of natural frequencies and damping ratios. To create realistic motion of branches, we choose three basic mode shapes from the modal analysis of a curved beam and combine them with a driven harmonic oscillator to approximate the Lissajous curves observed in pull-and-release tests of real trees. The forced vibration of trees is animated by utilizing a local coordinate transformation before applying the forced vibration model of curved beams. In addition, we assume that petioles are flexible in order to create natural motion of leaves. A wind field generated from three-dimensional fBm noise interacts with the trees. Moreover, our animation model allows users to interactively manipulate trees. We demonstrate several examples to show the realistic motion of interactive trees without using pre-computation or GPU acceleration. Various motions of trees can be achieved by choosing different combinations of natural frequencies and damping ratios according to tree species and seasons.
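The wind driving the trees is fractal noise built from summed octaves. The sketch below illustrates the octave-summing idea for a 1D fBm-style wind signal only; the paper uses three-dimensional fBm noise, and the octave count, persistence and base frequency here are assumptions.

```python
# Sketch: 1D fBm-style wind signal from summed octaves of interpolated value noise.
import numpy as np

def fbm_wind(n_samples=1024, octaves=5, persistence=0.5, seed=1):
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n_samples)
    signal = np.zeros(n_samples)
    amplitude, frequency = 1.0, 4
    for _ in range(octaves):
        knots = rng.standard_normal(frequency + 1)                       # coarse random values
        signal += amplitude * np.interp(t, np.linspace(0, 1, frequency + 1), knots)
        amplitude *= persistence                                          # finer octaves are weaker
        frequency *= 2
    return signal

if __name__ == "__main__":
    wind = fbm_wind()
    print(wind.shape, round(wind.std(), 3))   # use as a time-varying wind force
```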
Shaojun Hu*, Norishige Chiba, Dongjian He. Realistic animation of interactive trees. The Visual Computer, 2012. doi: 10.1007/s00371-012-0694-z. (Accepted by CGI2012, acceptance rate = 18%) [Link][Preprint][Video1][Video2][Binary][Slides]
Acknowledgments
The authors would like to thank anonymous reviewers for their helpful suggestions. This work was partially supported by the Doctoral Start-up Funds (2010BSJJ059), the Fundamental Research Funds (QN2011135) of Northwest A&F University, and the National Science & Technology Supporting Plan of China (2011BAD29B08).
Pseudo-dynamics Model of a Cantilever Beam for Animating Flexible Leaves and Branches in Wind Field
Shaojun Hu (a)*, Tadahiro Fujimoto (a), Norishige Chiba (a)
a. Faculty of Engineering, Iwate University, 4-3-5 Ueda, Morioka, Japan
Abstract
We present a pseudo-dynamics model of a cantilever beam to visually simulate the motion of leaves and branches in a wind field by considering the influence of the natural frequency (f0) and damping ratio (e). Our pseudo-dynamics model consists of a static equilibrium model, which can handle the bending of a curved beam loaded by an arbitrary force in three dimensions, and a dynamic motion model that describes the dynamic response of the beam subjected to turbulence. Using the static equilibrium model, we can control the free bending of petioles and branches. Furthermore, we extend it to a surface deformation model that can deform flexible laminae. Based on a mass-spring system, we analyze the dynamic response of a cantilever beam in turbulence with various combinations of f0 and e, and we give guidelines for determining the combination types of branches and leaves according to their shapes and stiffness. The main advantage of our technique is that we are able to deform curved branches and flexible leaves dynamically by taking their structures into account. Finally, we demonstrate the effectiveness of the proposed method by showing various motions of leaves and branches with different models.
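The dynamic-response idea can be conveyed with a single degree of freedom: a branch tip driven by a wind force and governed by a natural frequency and a damping ratio. The sketch below is a toy stand-in integrated with semi-implicit Euler, not the paper's curved-beam model; the mass, time step and gust profile are assumptions.

```python
# Sketch: damped, wind-driven oscillator parameterized by f0 and damping ratio zeta.
import numpy as np

def branch_response(force, f0=1.5, zeta=0.1, dt=1.0 / 60.0, mass=1.0):
    """force: array of wind force samples; returns tip displacement per time step."""
    omega0 = 2.0 * np.pi * f0
    x, v = 0.0, 0.0
    out = np.empty(len(force))
    for i, f in enumerate(force):
        a = f / mass - 2.0 * zeta * omega0 * v - omega0 ** 2 * x
        v += a * dt                      # semi-implicit Euler: velocity first,
        x += v * dt                      # then position with the new velocity
        out[i] = x
    return out

if __name__ == "__main__":
    t = np.arange(0, 10, 1.0 / 60.0)
    gust = 0.5 + 0.5 * np.sin(2 * np.pi * 0.3 * t)   # slow, gusty forcing (assumed)
    tip = branch_response(gust)
    print(round(tip.max(), 3))
```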
Shaojun Hu*, Tadahiro Fujimoto, Norishige Chiba. Pseudo-dynamics Model of a Cantilever Beam for Animating Flexible Leaves and Branches in Wind Field. Computer Animation and Virtual Worlds, 2009. doi: 10.1002/cav.309. (Accepted by CASA2009, acceptance rate = 33%). [Link][Preprint][Video1][Video2][Video3][Binary][Slides]
Acknowledgments
The authors would like to thank anonymous reviewers for their helpful suggestions. This work was supported by the Ministry of Education, Science, Sports and Culture, Japan with Grant No. 19300022.