Intelligent Platforms Research Institute

Information Technology and Human Factors

Continuum Computing Trustworthiness Research Group

The Continuum Computing Trustworthiness Research Group works on (1) developing guidelines for the quality of AI systems and their social implementation; (2) research on quality evaluation and management of AI systems; (3) research on software technology to evaluate and improve the trustworthiness of real-world systems with uncertainty; and (4) standardization and social implementation related to digital architecture.

News

There are no notices at the moment.

Research Topics

✪ Trust management of continuum networks

Our first focus is improving the trustworthiness of continuum networks. We develop methods for the trust management of such networks, with particular attention to network security management and operation.
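
As a simple illustration of what trust management at the node level can look like, the sketch below maintains a beta-reputation-style trust score per network node and updates it from observed interaction outcomes. It is a minimal, self-contained example; the class name TrustLedger and the decay factor are hypothetical assumptions and do not describe the group's actual methods or tools.

# Illustrative sketch of node-level trust scoring in a continuum network.
# The class and parameter names (TrustLedger, DECAY) are hypothetical,
# not the group's actual method or tooling.

from dataclasses import dataclass, field

DECAY = 0.95  # hypothetical forgetting factor for older observations


@dataclass
class TrustLedger:
    """Beta-reputation-style trust scores per network node."""
    positive: dict = field(default_factory=dict)  # successful interactions
    negative: dict = field(default_factory=dict)  # failed or suspicious interactions

    def observe(self, node: str, ok: bool) -> None:
        # Age the existing evidence, then record the new observation.
        self.positive[node] = DECAY * self.positive.get(node, 0.0) + (1.0 if ok else 0.0)
        self.negative[node] = DECAY * self.negative.get(node, 0.0) + (0.0 if ok else 1.0)

    def trust(self, node: str) -> float:
        # Expected value of a Beta(pos + 1, neg + 1) distribution.
        pos = self.positive.get(node, 0.0)
        neg = self.negative.get(node, 0.0)
        return (pos + 1.0) / (pos + neg + 2.0)


if __name__ == "__main__":
    ledger = TrustLedger()
    for outcome in [True, True, False, True]:
        ledger.observe("edge-node-7", outcome)
    print(f"trust(edge-node-7) = {ledger.trust('edge-node-7'):.3f}")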

✪ Quality management of machine learning systems

Our second focus is on the quality of machine learning systems. We develop methods to improve and evaluate the implementations of machine learning algorithms, models, and systems from a software engineering perspective. We also develop the "Machine Learning Quality Management Guideline" to establish quality goals and development processes for products and services using machine learning.
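
To give a concrete flavor of a software-engineering-style quality check on an ML component, the sketch below measures how stable a classifier's predictions are under small input perturbations. The stand-in model, the noise scale, and the "stability rate" metric are illustrative assumptions only; they are not metrics prescribed by the Machine Learning Quality Management Guideline.

# Illustrative quality check for an ML component: prediction stability
# under small input perturbations. The model and metric are hypothetical
# examples, not the guideline's prescribed criteria.

import numpy as np

rng = np.random.default_rng(0)


def predict(x: np.ndarray) -> np.ndarray:
    """Stand-in classifier: a fixed linear decision rule over 2-D inputs."""
    w, b = np.array([1.5, -0.7]), 0.1
    return (x @ w + b > 0).astype(int)


def stability_rate(x: np.ndarray, eps: float = 0.05, trials: int = 20) -> float:
    """Fraction of inputs whose label never flips under noise of scale eps."""
    base = predict(x)
    stable = np.ones(len(x), dtype=bool)
    for _ in range(trials):
        noisy = x + rng.normal(scale=eps, size=x.shape)
        stable &= predict(noisy) == base
    return float(stable.mean())


if __name__ == "__main__":
    test_inputs = rng.uniform(-1, 1, size=(200, 2))
    rate = stability_rate(test_inputs)
    print(f"stability rate = {rate:.2f}")  # e.g. flag a regression if it drops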

✪ Formal methods for software systems with uncertainty

Our third focus is evaluating and certifying the trustworthiness of software systems with uncertainty, such as cyber-physical systems. We develop formal methods for modeling and verifying software systems that must cope with probabilistic behavior, physical environments, and other sources of uncertainty. We also conduct foundational research on programming languages and interactive theorem provers.
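
As a small, self-contained example of the kind of computation that underlies probabilistic verification, the sketch below computes the probability of eventually reaching a target state in a discrete-time Markov chain by fixed-point iteration. The example chain and its state names are made up for illustration; this is textbook material rather than the group's own verification tooling.

# Reachability probability in a discrete-time Markov chain via value
# iteration, a basic building block of probabilistic model checking.
# The chain below is a made-up example.

# Transition probabilities of a small Markov chain.
P = {
    "init": {"work": 0.9, "fail": 0.1},
    "work": {"done": 0.8, "fail": 0.1, "work": 0.1},
    "fail": {"fail": 1.0},
    "done": {"done": 1.0},
}
TARGET = {"done"}


def reach_probability(eps: float = 1e-9) -> dict:
    """P(eventually reach TARGET) for every state, via value iteration."""
    prob = {s: (1.0 if s in TARGET else 0.0) for s in P}
    while True:
        delta = 0.0
        for s in P:
            if s in TARGET:
                continue
            new = sum(p * prob[t] for t, p in P[s].items())
            delta = max(delta, abs(new - prob[s]))
            prob[s] = new
        if delta < eps:
            return prob


if __name__ == "__main__":
    probs = reach_probability()
    print(f"P(init reaches done) = {probs['init']:.4f}")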

Group Members

Projects
