
Ying Ma

Assistant Professor
School of Information and Electronics
Beijing Institute of Technology 

Email: mayingbit2011@163.com
Office: Room 303, Building 10

About Me

I joined the School of Information and Electronics, Beijing Institute of Technology, in November 2022. Before that, I was a tenure-track assistant professor in the Department of Electrical and Computer Engineering at the University of Central Florida in Orlando, starting in August 2021. I received my Ph.D. degree in 2021 from the University of Florida under the supervision of Prof. Jose Principe. During my Ph.D., I worked as a summer research intern at Apple Siri Understanding in 2019 and at Google in 2020 and 2021. Previously, I was a visiting student at the University of Southampton in 2015, where I was advised by Prof. Lajos Hanzo and Prof. Sheng Chen.

My primary research interest is deep learning (in particular, lightweight and lifelong learning) with applications to signal processing, natural language processing (NLP), and computer vision (CV). My research focuses on developing new ML architectures to bridge the gap between theoretical developments and real-world applications.

Research

Our primary research interest is deep learning (in particular, lightweight and lifelong learning) with applications to signal processing, natural language processing (NLP), and computer vision (CV).

Machine learning for signal processing

2024-now, PI, funded by the National Natural Science Foundation of China (NSFC)
2024-now, PI, funded by the National Key Research and Development Program of China


Joint treatment of multiple modalities utilizes more comprehensive information and therefore achieves better performance. This project aims to learn informative yet concise representations from unlimited multimodal data by utilizing an external memory. We particularly focus on applications such as specific emitter identification and localization.

Memory augmented network for lifelong learning

[Supported by DARPA L2M Project]


Autonomous Vision System

This system employs a self-organizing object detector as a front end to deconstruct the environment. First, the environment is decomposed into objects; then, our IMRL (internal memory augmented RL) network is adopted to learn the objects’ affordance (i.e., a sequence of actions to manipulate these objects). The learned objects, along with their properties, are stored in the external memory and utilized when those same objects are detected in an unseen environment. This autonomous vision system is one step closer to lifelong learning because it learns continuously during execution and becomes increasingly adept at performing tasks without forgetting its previous experiences. 

Y. Ma, J. Brooks, H. Li, J. Principe, “Procedural Memory Augmented Deep Reinforcement Learning,” accepted by IEEE Trans. Artificial Intelligence (TAI).
H. Li, Y. Ma, J. C. Principe, “Cognitive Architecture for Videogames,” IEEE IJCNN 2020.
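
A minimal sketch of the external-memory idea described above, in Python with hypothetical names: detector embeddings serve as keys, and the affordances learned for an object are stored as values and retrieved when a similar object is detected again in a new environment.

```python
import numpy as np

class ObjectMemory:
    """Toy external memory: object embeddings are keys, learned affordances are values."""

    def __init__(self, dim):
        self.keys = np.empty((0, dim))   # one row per stored object embedding
        self.values = []                 # affordance (e.g., an action sequence) per key

    def write(self, embedding, affordance):
        self.keys = np.vstack([self.keys, embedding[None, :]])
        self.values.append(affordance)

    def read(self, embedding, threshold=0.8):
        """Return the affordance of the most similar stored object, or None if novel."""
        if not self.values:
            return None
        sims = self.keys @ embedding / (
            np.linalg.norm(self.keys, axis=1) * np.linalg.norm(embedding) + 1e-8)
        best = int(np.argmax(sims))
        return self.values[best] if sims[best] >= threshold else None

# Usage: the detector embeds a "door" object; the agent stores the action
# sequence that opened it and reuses the plan when a similar door reappears.
memory = ObjectMemory(dim=4)
door = np.array([0.9, 0.1, 0.0, 0.2])
memory.write(door, affordance=["approach", "grasp_handle", "pull"])
print(memory.read(np.array([0.88, 0.12, 0.05, 0.18])))
```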


A Taxonomy For Memory Networks

Many dynamic neural networks with new memory structures have recently emerged to address sequence learning and memorization problems. We proposed a systematic approach [Tax] to analyze and compare the underlying memory structures of popular recurrent neural networks, including the vanilla RNN, LSTM, neural stack [NS], neural Turing machine, neural attention models, etc. Our analysis provided a window into the “black box” of recurrent neural networks from a memory usage perspective. Accordingly, we developed a taxonomy for these networks and their variants, and revealed their intrinsic inclusion relationships mathematically. To help users select an appropriate architecture, we also connected the relative expressive power of models to the memory requirements of different tasks, including sentiment analysis, question answering, and others.

[Tax] Y. Ma, J. Principe, “A Taxonomy for Neural Memory Networks,” IEEE Trans. Neural Netw. Learn. Syst. (TNNLS), vol. 31, no. 6, pp. 1780-1793, 2020.
[NS] Y. Ma, J. Principe, “Comparison of Static Neural Network with External Memory and RNNs for Deterministic Context Free Language Learning,” IEEE IJCNN 2018.
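
For readers unfamiliar with the memory-structure viewpoint, the following illustrative sketch (not code from the paper) contrasts a vanilla RNN, whose only memory is a fixed-size hidden state, with an NTM-style external memory that is read and written through attention weights.

```python
import numpy as np

def rnn_step(h, x, W_h, W_x):
    # Vanilla RNN: all history is folded into the fixed-size hidden state h.
    return np.tanh(W_h @ h + W_x @ x)

def memory_step(M, w, erase, add):
    # NTM-style write: M is an (N_slots x width) matrix updated with soft
    # attention weights w (summing to 1) and erase/add vectors.
    M = M * (1 - np.outer(w, erase))   # selectively erase
    return M + np.outer(w, add)        # selectively write

h = np.zeros(8)
x = np.random.randn(8)
W_h, W_x = np.random.randn(8, 8), np.random.randn(8, 8)
h = rnn_step(h, x, W_h, W_x)

M = np.zeros((16, 8))
w = np.full(16, 1 / 16)               # uniform attention, for illustration only
M = memory_step(M, w, erase=np.ones(8) * 0.5, add=x)
```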


Internal Memory Augmented Deep RL

We augmented current deep RL networks by introducing an extra gate mechanism. In addition, we introduced an internal reward mechanism to promote longer macro actions. Using these two techniques, our agent interacts less with the environment and explores in a more structured manner, thereby obtaining higher cumulative reward. Both techniques can be applied to any deep RL architecture. In our experiments, we incorporated them into an Asynchronous Advantage Actor-Critic (A3C) network and tested them on Atari games. The results show improvements in both computational efficiency and performance.
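
A rough sketch of the internal-reward idea, using hypothetical shaping constants: while the agent persists with the same (macro) action, a small bonus is added to the environment reward, which biases exploration toward longer, more structured action sequences.

```python
def shaped_reward(env_reward, action, prev_action, macro_len, bonus=0.01, max_len=10):
    """Add a small internal reward while the agent continues the same macro action.

    Hypothetical shaping for illustration; the bonus is capped so it cannot
    dominate the environment reward.
    """
    if action == prev_action and macro_len < max_len:
        macro_len += 1
        internal = bonus * macro_len
    else:
        macro_len = 1
        internal = 0.0
    return env_reward + internal, macro_len

# Inside a (simplified) actor-critic rollout loop:
# r, macro_len = shaped_reward(r_env, a_t, a_prev, macro_len)
# returns and advantages are then computed from the shaped reward r.
```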

Underwater image segmentation and classification

[Funded by the National Oceanic and Atmospheric Administration (NOAA)]


To recognize biota and coral distributions at Pulley Ridge HAPC, a mesophotic reef in the Gulf of Mexico, we developed convolutional neural network-based image segmentation and classification methods to recognize the amount and health states of corals with sparse labels [coral1][coral2]. As an extension of this work, we applied our sequential neural network to learn plant-development syntactic patterns and automatically identify plant species [coral3]. This is an important first step not only for identifying plants but also for automatically generating realistic plant models from observations.

[coral1] Y. Xi, Y. Ma, S. Farrington, J. Reed, B. Ouyang, J. Principe, “Fast Segmentation for Large and Sparsely Labeled Coral Images,” IEEE IJCNN 2019.
[coral2] Y. Ma, B. Ouyang, S. Farrington, S. Yu, J. Reed, J. Principe, “Joint Segmentation and Classification with Application to Cluttered Coral Images,” IEEE GlobalSIP 2019.
[coral3] K. Li, Y. Ma, J. Principe, “Automatic Plant Identification Using Spatio-Temporal Evolution Model (STEM) Automata,” IEEE MLSP 2017.
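
One common way to train a segmentation network when only a sparse subset of pixels is annotated (a sketch of the general technique, not the exact method in [coral1][coral2]) is to mask the per-pixel loss so that unlabeled pixels contribute no gradient.

```python
import torch
import torch.nn.functional as F

def sparse_pixel_loss(logits, labels, ignore_index=-1):
    """Cross-entropy over labeled pixels only.

    logits: (B, C, H, W) network output
    labels: (B, H, W) with ignore_index at unlabeled pixels
    """
    return F.cross_entropy(logits, labels, ignore_index=ignore_index)

# Example: only a small grid of pixels carries a coral/background/other label.
logits = torch.randn(2, 3, 64, 64, requires_grad=True)
labels = torch.full((2, 64, 64), -1, dtype=torch.long)   # all pixels unlabeled
labels[:, ::10, ::10] = torch.randint(0, 3, (2, 7, 7))   # sparse annotations
loss = sparse_pixel_loss(logits, labels)
loss.backward()
```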

Graph Neural Network

[In collaboration with Apple]


Graph neural networks are an extension of sequential neural networks to non-Euclidean data structures. In this work, we addressed false trigger mitigation and intent classification by analyzing automatic speech recognition lattices with graph neural networks (GNNs), namely graph convolutional networks and graph attention networks. This work was used to improve the performance of Apple’s Siri application.

P. Dighe, S. Adya, N. Li, S. Vishnubhotla, D. Naik, A. Sagar, Y. Ma, S. Pulman, J. Williams, “Lattice-based Improvements for Voice Triggering Using Graph Neural Networks,” IEEE ICASSP 2020.
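
The sketch below shows the generic graph-convolution propagation rule applied to a toy lattice, assuming the lattice has already been converted to an adjacency matrix and per-arc feature vectors; it illustrates the idea rather than the production system.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer: H = ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Toy lattice with 4 arcs/nodes and 8-dim features
# (e.g., acoustic score, LM score, word embedding).
A = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
X = np.random.randn(4, 8)
W = np.random.randn(8, 16)
H = gcn_layer(A, X, W)   # node embeddings; pooled and classified for trigger/intent
```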

Question Answering on Knowledge Graph with RL

[In collaboration with Google]


We applied a deep RL network to navigate a knowledge graph conditioned on the input question by defining a Markov decision process on the graph. The agent travels from a starting vertex toward an answer vertex by choosing edges, thereby finding predictive paths and answers. Unlike many other environments, the knowledge graph has a large action space and missing edges. To handle this, we combined our RL network with a traditional embedding-based method for QA on knowledge graphs to obtain better performance.
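
A toy sketch of the MDP formulation, with a hypothetical miniature knowledge graph: the state is the current entity, the actions are its outgoing edges, and an episode is a walk from the question's topic entity toward a candidate answer.

```python
import random

# Toy knowledge graph: entity -> list of (relation, next_entity) edges.
KG = {
    "Paris":  [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("has_capital", "Paris"), ("member_of", "EU")],
    "Europe": [],
    "EU": [],
}

def rollout(start, policy, max_hops=3):
    """Walk the graph; the policy scores each outgoing edge given the question."""
    state, path = start, []
    for _ in range(max_hops):
        actions = KG[state]
        if not actions:                      # dead end (missing edges are common)
            break
        relation, nxt = policy(state, actions)
        path.append((state, relation, nxt))
        state = nxt
    return state, path                       # terminal entity is the predicted answer

# Placeholder policy: uniform random choice; in training it would be a neural
# network scoring (question, state, relation) and updated with policy gradients.
answer, path = rollout("Paris", policy=lambda s, acts: random.choice(acts))
```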

Machine Intelligence and Deep Learning (MIDL) Laboratory

We are establishing the Machine Intelligence and Deep Learning (MIDL) research group at Beijing Institute of Technology. The lab moved from the Department of Electrical and Computer Engineering at the University of Central Florida (ranked 58th by U.S. News and 43rd in Artificial Intelligence and Machine Learning on CSRankings.org). Our primary research interest is deep learning (in particular, lightweight and lifelong learning) with applications to signal processing, natural language processing (NLP), and computer vision (CV). Although ML research has surged over the past several years, machine learning based artificial intelligence (AI) is still far less capable than human intelligence, and many real-world practical problems have not been solved properly by AI. Our research therefore focuses on developing new ML architectures to bridge the gap between theoretical developments and real-world applications. We have collaborated with researchers from diverse fields, including ocean engineering, wireless communications, and cognitive science, to apply machine learning algorithms to real-world problems.

Openings

  • Prospective students:
    I am looking for strong and highly motivated master's and Ph.D. students who are interested in machine learning and deep learning. If you are interested in working with me, please contact me by email and include your CV and sample publications, if any.
  • BIT students:
    For students at Beijing Institute of Technology, multiple part-time and volunteer research assistant positions are available. You will have priority to join our lab if you perform well on the project we collaborate on. If you are interested, please send me your CV.

Team Member

Current students:
2022: Jie Yang, Yiyang Li, Runzhou Li
2023: Huajie Wu, Yanqing Zhao, Zehao Wang

Students from UCF:

Anuj Sarda
Master

Research Interest: Reinforcement Learning

Owen Burns
Undergraduate

Research Interest: Question Answering on Knowledge Graph

Judd Brock-Edgar
Undergraduate

Research Interest: Reinforcement Learning

Jose Hoyos Sanchez
Undergraduate

Research Interest: Reinforcement Learning

Teaching and Mentoring

Instructor

  • EEL 6812 Introduction to Neural Networks and Deep Learning. Spring 2022
  • EEL 5825 Pattern Recognition and Machine Learning. Fall 2021
  • EEL 3552c Signal Analysis and Analog Communication. Fall 2021, Spring 2022.

Undergraduate Research: Chess AI

I am mentoring an undergraduate research team working on a project to design an autonomous chessboard that realizes a custom AI chess algorithm. We are implementing an RL-based method (A3C and deep Q-learning) that gives commands to electromechanical devices through firmware, which will ultimately move physical chess pieces across the chessboard in real time. The end goal of the project is to exhibit it in the engineering building so that students can compete against it, gather findings from it, and see the expertise of our student engineers.
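
As a simple illustration of the firmware interface (with a hypothetical square size and command format), a chosen chess move can be translated into pick-and-place coordinates for the electromechanical hardware:

```python
def move_to_gantry_command(move, square_mm=50):
    """Convert a UCI move (e.g. 'e2e4') into hypothetical gantry coordinates (mm).

    Assumes an 8x8 board with 50 mm squares and origin at square a1; the real
    firmware protocol would define its own command format.
    """
    def square_xy(sq):
        file_idx = ord(sq[0]) - ord('a')      # files a..h -> 0..7
        rank_idx = int(sq[1]) - 1             # ranks 1..8 -> 0..7
        return file_idx * square_mm, rank_idx * square_mm

    (x0, y0), (x1, y1) = square_xy(move[:2]), square_xy(move[2:4])
    return f"PICK {x0} {y0}; PLACE {x1} {y1}"

print(move_to_gantry_command("e2e4"))   # PICK 200 50; PLACE 200 150
```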


Circuit Schematic


Chessboard/Firmware
