
Michael Whalen

Requirements, Models, and Automated Reasoning in the Cloud. The RE and formal methods research community has long touted formal languages and tools to help improve requirements, system modeling, and checking implementation conformance. While there have been successful applications of formal methods in hardware design and aerospace, despite 50 years of research and development we have not seen wide adoption of formal techniques for large and complex systems such as web services, industrial automation, or enterprise support software. Two key difficulties in applying automated reasoning to these systems remain: eliciting and explaining requirements with customers, and creating the accurate architectural models of the system needed to support analysis. With the cloud, much of this has changed. For requirements, we can work backwards from common pain points for a large community of users, propose solutions, and quickly iterate to better address user needs. In addition, descriptions of cloud services provide accurate models in the form of computer-readable contracts. These contracts establish and govern how the system behaves, and in many cases they are amenable to formal analysis at scale. Most importantly, since these models are used by a large user community, it is now economically feasible to build the tools needed to verify them. In this talk, we discuss the trend of constructing practical and scalable cloud-based formal methods at Amazon Web Services and how customers can easily use them – sometimes with just one click.
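To make the idea of analysing a computer-readable contract concrete, here is a deliberately tiny sketch. The "contract" and property below are invented for illustration (this is not an AWS tool or API); real cloud-scale verification uses SMT solvers rather than brute-force enumeration, but the shape of the question — does any reachable state violate the property? — is the same.

```python
# Toy illustration: exhaustively checking a machine-readable "contract"
# against a safety property over a small finite domain. All names here
# are hypothetical; production tools use SMT solvers, not enumeration.

from itertools import product

def contract_allows(is_owner, is_public, is_suspended):
    # Hypothetical access contract for a storage service: access is
    # granted iff the caller owns the resource or it is public, and
    # never while the account is suspended.
    return (is_owner or is_public) and not is_suspended

def property_holds(is_owner, is_public, is_suspended):
    # Safety property: a suspended account is never granted access.
    allowed = contract_allows(is_owner, is_public, is_suspended)
    return not (is_suspended and allowed)

def verify():
    # Enumerate every assignment of the three booleans and collect
    # any states that violate the property.
    return [
        state for state in product([False, True], repeat=3)
        if not property_holds(*state)
    ]

print(verify())  # [] — an empty list means no counterexample exists
```

An empty counterexample list here is a proof over the finite state space; the economic point of the talk is that when one contract serves millions of users, building a far more capable version of this checker pays for itself.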

Dr. Michael Whalen is a Principal Applied Scientist at Amazon Web Services and the former Director of the University of Minnesota Software Engineering Center. Dr. Whalen is interested in formal analysis, language translation, testing, and requirements engineering. He has led development of simulation, translation, testing, and formal analysis tools for C, Rust, and Model-Based Development languages including Simulink, Stateflow, SCADE, and RSML-e, and has published 85 peer-reviewed articles on these topics. Dr. Whalen has led successful formal verification projects on foundational Amazon C libraries and large industrial avionics models, including secure autonomous vehicles (DARPA HACMS project), pilots’ displays (Rockwell-Collins ADGS-2100 Window Manager), redundancy management and control allocation (AFRL CerTA FCS program) and autoland (AFRL CerTA CPD program). He is currently working on formal verification at “cloud scale”, looking at how to create proof engines that can cost-effectively scale to larger and more complex problems than are handled by current tools. He is also involved with outreach, helping developers and business customers apply verification tools to improve their team’s quality, velocity, and innovation.

Alexandros Mouzakitis

Agile Delivery of Requirements Engineering at Jaguar Land Rover. Software is shaping and influencing our world, and day-to-day life without it is hard to imagine. Since the introduction of the first Electronic Control Unit (ECU) in the 1970s, the automotive industry has seen a substantial increase in both offboard and onboard software. The use of software in the automotive industry has led to a significant increase in the number of requirements, due to the complexity of new interconnected vehicle systems, features, and functions. As a result, requirements engineering plays a key role in the definition, development, and release of a high-quality product. This talk provides an overview of how Jaguar Land Rover delivers requirements through model-based systems engineering operationalised by agile best practices.

Dr. Alex Mouzakitis is the Chief Technical Specialist for Systems Engineering at Jaguar Land Rover. Dr. Mouzakitis has over 20 years of technological and leadership experience, especially in the area of automotive systems. In his current role he is responsible for leading the end-to-end system architecture and tooling of the Digital Delivery Platform for system, software, test, and release across all programmes and vehicle lines. In addition, Dr. Mouzakitis is responsible for the deployment of agile Model-Based Systems Engineering and standardisation to promote re-use and reduce system complexity through global configuration. In his previous positions within JLR, Dr. Mouzakitis served as the Head of the Vehicle Engineering, Infotainment and Connectivity Research Department, Head of the Electrical, Electronics and Software Engineering Research Department, and prior to that as the Head of the Model-Based Product Engineering Department. Dr. Mouzakitis is a Chartered Engineer and a Fellow of the IET and InstMC engineering institutions. He has published over 130 scientific papers in international journals, book chapters, and international conferences. Dr. Mouzakitis holds a BSc (Hons) in Integrated Manufacturing Technology and a PhD in Machine Learning and Artificial Intelligence for Autonomous Vehicles from the University of Wales, an MSc in Systems and Control from Coventry University, and an EngD in Automotive Embedded Software Development from The University of Warwick. Dr. Mouzakitis holds an Industrial Fellowship with WMG at The University of Warwick.

Fosca Giannotti

Explainable Machine Learning for Trustworthy AI. Black-box AI systems for automated decision making, often based on machine learning over (big) data, map a user’s features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any effective collaboration, this requires good communication, trust, clarity, and understanding. Explainable AI addresses these challenges, and for years different AI communities have studied the topic, leading to different definitions, evaluation protocols, motivations, and results. This lecture provides a reasoned introduction to the work on Explainable AI (XAI) to date, and surveys the literature with a focus on machine learning and symbolic AI related approaches. We motivate the need for XAI in real-world and large-scale applications, while presenting state-of-the-art techniques and best practices, as well as discussing the many open challenges.
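To give a flavour of what a post-hoc explanation of a black-box score can look like, here is a minimal sketch of one simple technique, feature ablation: probe the model by zeroing one input at a time and measuring how much the output moves. The scoring function and feature names are invented for illustration; the XAI literature the lecture surveys covers far richer methods (surrogate models, counterfactuals, rule extraction).

```python
# Toy sketch of a simple post-hoc explanation technique: feature
# ablation. The black-box model below is a stand-in invented for
# illustration; in practice it would be an opaque learned model.

def black_box_score(features):
    # Hypothetical credit score from three inputs:
    # income, debt, years employed.
    income, debt, years_employed = features
    return 0.5 * income - 0.8 * debt + 0.2 * years_employed

def ablation_importance(model, features):
    """Importance of each feature = |score change when it is zeroed|."""
    baseline = model(features)
    importances = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] = 0.0  # ablate feature i
        importances.append(abs(baseline - model(perturbed)))
    return importances

applicant = [60.0, 20.0, 5.0]  # income, debt, years_employed
print(ablation_importance(black_box_score, applicant))
# → [30.0, 16.0, 1.0]: income dominates this decision, tenure barely matters
```

Even this crude probe turns an opaque score into a ranked list of the inputs the decision hinged on, which is the kind of communication between person and machine that the talk argues trustworthy AI requires.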

Fosca Giannotti is professor of Computer Science at Scuola Normale Superiore, Pisa, and associate at the Information Science and Technology Institute “A. Faedo” of the National Research Council, Pisa, Italy. She has recently been awarded the prestigious European Research Council Advanced Grant “XAI – Science and Technology for the Explanation of AI Decision Making”. She is among the core PI scientists of the two AI Networks of Excellence, TAILOR and HumaneAI_Net. Professor Giannotti is a pioneering scientist in mobility data mining, social network analysis, privacy-preserving data mining, and responsible AI. She co-leads the Pisa KDD Lab - Knowledge Discovery and Data Mining Laboratory, a joint research initiative of the University of Pisa and ISTI-CNR, founded in 1994 as one of the earliest research labs on data mining. She is a recognized international authority on social mining from Big Data: smart cities, human dynamics, and social and trustworthy AI. She is the author of more than 300 papers and has coordinated tens of European projects and industrial collaborations. Professor Giannotti is deputy director of SoBigData, the European research infrastructure on Big Data Analytics and Social Mining, an ecosystem of tens of cutting-edge European research centres providing an open platform for interdisciplinary data science and data-driven innovation. On March 8, 2019 she was featured as one of the 19 Inspiring Women in AI, Big Data, Data Science, and Machine Learning by KDnuggets.com, a leading site on AI, Data Mining, and Machine Learning (https://www.kdnuggets.com/2019/03/women-ai-big-data-science-machine-learning.html). Since February 2020, Professor Giannotti has been the Italian Delegate of Cluster 4 (Digital, Industry and Space) in Horizon Europe.