Information Technology
and Human Factors

Continuum Computing Trustworthiness Research Group


The Continuum Computing Trustworthiness Research Group works on (1) developing guidelines for the quality of AI systems and their social implementation; (2) quality evaluation and management of AI systems; (3) software technology for evaluating and improving the trustworthiness of real-world systems with uncertainty; and (4) standardization and social implementation related to digital architecture.

News

There are no notices at the moment.

Research Topics

✪  The development of guidelines for the quality of AI systems and their social implementation

We develop the "Machine Learning Quality Management Guideline" to establish quality goals and development processes for products and services using machine learning.

✪  Research on quality evaluation and management of AI systems

We develop methods to improve and evaluate the implementations of machine learning algorithms, models, and systems from a software engineering perspective.

  • Machine Learning Quality Management Project (NEDO funded project)

    • Open testbed toolset Qunomon for the quality management of AI systems
  • JST FOREST project
  • JSPS project
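One common software-engineering technique for evaluating ML implementations, where no exact expected output exists, is metamorphic testing: check that predictions respect relations that should hold by construction. The sketch below is only an illustration (not part of Qunomon or any project deliverable named above); it tests a toy 1-nearest-neighbour classifier against the relation that predictions must be invariant under permutation of the training set.

```python
import random

def knn_predict(train, query):
    """1-nearest-neighbour classifier: return the label of the closest training point."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda p: dist(p[0], query))[1]

def permutation_mr_holds(train, queries):
    """Metamorphic relation: shuffling the training set must not change predictions."""
    baseline = [knn_predict(train, q) for q in queries]
    shuffled = train[:]
    random.shuffle(shuffled)
    follow_up = [knn_predict(shuffled, q) for q in queries]
    return baseline == follow_up

train = [((0.0, 0.0), "a"), ((1.0, 1.0), "b"), ((0.2, 0.1), "a")]
queries = [(0.1, 0.0), (0.9, 1.0)]
print(permutation_mr_holds(train, queries))  # → True
```

A violation of such a relation signals a bug in the implementation (e.g. order-dependent tie-breaking) even though no ground-truth label was needed.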

✪  Formal methods for software systems with uncertainty

Our third focus is to evaluate and certify the trustworthiness of software systems with uncertainty, such as cyber-physical systems. We develop formal methods for modeling and verifying software systems that deal with probabilistic events, physical environments, and so on. We also conduct foundational research on programming languages and interactive theorem provers.
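A minimal example of the kind of question such formal methods answer is probabilistic reachability: given a Markov chain model of a system with uncertainty, what is the probability of eventually reaching a target state? The sketch below (an illustration only, not the group's tooling) computes this by fixed-point iteration on the standard equation x = Px with the target pinned to 1.

```python
def reach_prob(P, target, iters=1000):
    """Probability of eventually reaching `target` from each state of a
    Markov chain with transition matrix P, via fixed-point iteration."""
    n = len(P)
    x = [1.0 if s == target else 0.0 for s in range(n)]
    for _ in range(iters):
        x = [1.0 if s == target
             else sum(P[s][t] * x[t] for t in range(n))
             for s in range(n)]
    return x

# Three-state chain: from state 0, move to the target (state 2) with
# probability 0.5 or to an absorbing failure state (state 1) with 0.5.
P = [[0.0, 0.5, 0.5],
     [0.0, 1.0, 0.0],   # absorbing failure state
     [0.0, 0.0, 1.0]]   # absorbing target state
print(reach_prob(P, target=2))  # → [0.5, 0.0, 1.0]
```

Probabilistic model checkers verify properties like this symbolically or numerically over much larger state spaces, which is what makes them applicable to cyber-physical systems.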

✪  Standardization and social implementations related to digital architecture

  • Standardization activities and related research
  • Social implementation through contributions to the open-source software community

Group Members

Projects
