The doors open at 08:30. The programme of the Big-Data.AI Summit starts with a two-hour block on the main stage of hub.berlin. In the afternoon, presentations, panel discussions and workshops run on five parallel stages.
10 Apr 2019: The day opens with a keynote (10:30–10:50) and a panel (10:50–11:30) in the Black Arena. From 11:30, talks run in 20-minute slots until 17:30 on the John McCarthy, Joan Clarke, Grace Hopper, Hari Seldon and Workshop Stages, grouped into the tracks AI, Big Data, Digital Business Strategy, Retail and Logistics, and Ethics and Society. Sessions are held in English (EN) and/or German (DE); the Workshop Stage hosts longer 80-minute workshop blocks.
At idealo.de, we have a lot of repetitive work that we can automate with machine learning. In this talk, I will present some of the problems that we face and how we solved them. In particular, I will share how we taught a computer to understand the aesthetics of hotel photos in order to rank millions of images. Further use cases include image super-resolution, categorizing hotel images and automating the image gallery for our product catalog. I will also show which technologies we used to solve these problems. I will conclude my talk with some further projects that we are going to work on and a summary of what went well and what did not.
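As a hedged illustration of how such aesthetic ranking can work (an assumption for exposition, not idealo's actual implementation): a pretrained CNN backbone with a small softmax head predicts a distribution over aesthetic score buckets, and images are then ranked by the mean of that distribution.

```python
# Illustrative sketch of CNN-based aesthetic scoring (assumption,
# not idealo's production code): rank images by predicted mean score.
import numpy as np
import tensorflow as tf


def build_aesthetic_model(num_buckets: int = 10) -> tf.keras.Model:
    """MobileNetV2 backbone plus a softmax head over score buckets 1..10."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, pooling="avg",
        weights="imagenet")
    return tf.keras.Sequential([
        base,
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(num_buckets, activation="softmax"),
    ])


def mean_score(bucket_probs: np.ndarray) -> np.ndarray:
    """Collapse the per-bucket distribution into a single score per image."""
    buckets = np.arange(1, bucket_probs.shape[-1] + 1)
    return (bucket_probs * buckets).sum(axis=-1)


if __name__ == "__main__":
    model = build_aesthetic_model()
    # Stand-in batch of photos; in practice, decoded and resized JPEGs.
    photos = np.random.rand(8, 224, 224, 3).astype("float32")
    scores = mean_score(model.predict(photos, verbose=0))
    ranking = np.argsort(scores)[::-1]  # best-looking photos first
    print(ranking, scores[ranking])
```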
Data is at the centre of the value-creation process. Companies need an elaborate data culture and cutting-edge data management that ensures both high data quality and adherence to data protection and data security regulations. Digitalisation as well as data & analytics have become key areas of activity for companies that want to remain competitive in the global market and meet the requirements of their clients. Companies can only succeed if they recognise their own data as an asset, properly prioritise data quality and data management, and 'refine' their own data with data from external sources. This presentation will focus in particular on data-based business models and how a data economy can be achieved through data sharing. Sharing and enriching data is a new and increasingly important aspect of the data economy. A company's ability to integrate external data into its own analyses, to share data with other companies and to monetise enriched data will be decisive for its future competitiveness. In order to translate its own data into meaningful and profitable business models, a company needs to develop a data culture, increase data quality and look beyond the horizon of its own data. Data sharing can occur in various configurations. The simplest form occurs within the company itself, in order to overcome data silos, but data sharing is also possible between companies, based on various technical solutions. The last part of the presentation will consist of practical examples addressing models currently in use and what data sharing could look like in the future.
Stepping from the lab into the factory is what makes a successful data-driven company. Only if the scaling of data use cases into AI products is accomplished can a company claim to have successfully implemented AI and machine learning in the organization. As part of the digitization strategy, a company-wide global data strategy is the first step in this direction. Many large companies in Germany and Europe are now starting their data transformation in order not to lose their edge to companies in the USA and China.
• Learn how Volkswagen started its data journey by defining a global data strategy in alignment with its digitization efforts.
• Learn how Porsche and VWFS set up their teams and deliver data use cases from idea to factory, leveraging DevOps.
• Benefit from over 50 years of combined experience in data & AI projects.
The application of Smart Data Analytics to roughly nine years of daily order entries will be explained. The goal, an improved prediction of resource needs in logistics operations (e.g. staffing), will be shown, and the analytics used will be presented together with the business context. In addition, the Smart Data Solution Center BW will be introduced, along with how it supports industrial users who are new to data analytics.
A recent IBM study shows demand for more than 2.7 million data analytics professionals in the US alone. Companies face the challenge of recruiting the best talent from the market, re-skilling their own employees and looking out for future talent within their own ranks. StackFuel is a data science training provider addressing this fast-paced and evolving market. Having interviewed more than 100 key decision makers in the industry in 2018, we recognize problem patterns and best practices across industries regarding big data and AI strategies throughout the organization. In the talk, we will go into depth about the different viewpoints of stakeholders. Key big data and AI decision makers from our industry partners will participate and showcase lessons learned.
The identification of appropriate solutions in a world of products with increasing complexity, fuzzy customer requirements and fast-moving competitors is a really challenging task. A new approach is needed to provide an intuitive access to complex products and services for customers as well as for sales representatives. Knowledge-based AI meets these requirements by allowing a flexible mapping of product, marketing and sales knowledge to the formal representation of a knowledge base. The system can assist customers, product managers and sales representatives by guaranteeing the consistency and appropriateness of proposed solutions, identifying additional selling opportunities and by providing intelligent explanations for identified results. Using examples from the manufacturing industries we show how constraint satisfaction, model-based diagnosis, personalization and intuitive knowledge acquisition techniques support the effective implementation of customer-oriented sales dialogs.
REWE Group is one of the largest retail companies in Germany. REWE Digital GmbH has been responsible for developing the REWE Group's online food business for several years, and successfully so: REWE is currently the largest online food supplier in Germany. Decisive for the success of the business model is the customer's satisfaction with the online offer. In addition to many other quality features, high product availability and a flexible selection of delivery windows are particularly important to the customer. REWE uses data-driven solutions to maximize availability while optimizing supply chain efficiency. In this talk, we present the Capacity Utilization Forecast, which predicts the anticipated utilization of outbound deliveries dynamically and in near real time. We furthermore give insights into developing data science use cases in an agile development process.
Setting up a data science pipeline in a financial institution can be really challenging: lots of conflicting requirements regarding security, stability and agility have to be fulfilled. Olaf Hein describes a proven architecture for such a pipeline, based on Hadoop and web-based notebooks.
In the domain of enterprise applications, organizations usually implement application performance monitoring (APM) software which creates a lot of structured and unstructured data from various system instances. In order to take advantage of this massive data collection, the research project between Fujitsu and the Otto-von-Guericke-University Magdeburg investigates the comparability and applicability of APM data to serve as an input for a domain-specific performance knowledge base. The research artefact is aimed at supporting decisions of capacity management and performance engineering activities using Advanced Analytics techniques such as optimization algorithms and prediction models.
The STAR IT platform was implemented by the European Central Bank (ECB) in 2017/18 to facilitate supervisory exercises such as the EU-wide banking stress test. It leverages a wide range of technologies, many of them from the big data domain; examples are MongoDB and Oracle Exadata. The solution automates previously manual processes and provides ECB supervisors with the necessary means to collect large volumes of data from supervised banks and subsequently analyse it in a timely manner. A single submission from a bank can contain up to 1 million data points and require several hundred thousand data quality checks to be executed. The platform also allows banks to perform pre-validation prior to submission. In addition, the platform enables supervisors to run sophisticated statistical models to challenge submissions from banks and subsequently analyse the results in a collaborative manner. The platform was successfully used for the first time in the 2018 EU-wide stress test.
How are automation and AI making retailers more competitive? AI- and ML-enhanced retail solutions enable real-time decision-making for more autonomous, profitable business decisions across retail. A recent JDA Software & Microsoft survey found that 53% of retailers are investing in AI and ML over the next 18 months. Retailers expect to deploy AI solutions that can analyze data, draw out valuable insights and make decisions to automate core processes like pricing and replenishment, improving product availability, profitability and the customer experience. Learn how to unleash the full business impact of intelligent data and systems.
Over the last 3 years, PAYBACK has gained significant experience in deploying big data for direct marketing activities and has successfully implemented several near real-time and real-time direct marketing use cases in collaboration with partners. To optimize marketing outcomes and boost response rates, PAYBACK has transformed its direct marketing activities, shifting its focus from pre-planned campaigns to creating individual shopping journeys, where shoppers receive the most relevant offers at the most opportune times. To make this possible, PAYBACK has built a tech stack that enables the tracking and monitoring of purchases, location-specific events and other actions performed by customers in the PAYBACK mobile app.
We're living in a world of relations. Every click on a link, every article, image or video you have seen, every like you have given: all of these create relations between you and some piece of content. This does not apply only to social media. In the land of banking transactions, you create relations by sending and receiving money. Real, hard-earned money, so every relation is even stronger! Analysing transaction footprints is another key to your client's mind. Does he spend a lot on betting? Probably not the person you'd like to lend money to. Is he sending money to charity? Bad time to offer a loan; try some investment products instead. In this talk, we will describe the process of transaction classification used in one of our products, Instalment detector, developed for one of the major Czech banks.
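Purely as an illustrative sketch (not the actual Instalment detector), transaction classification from textual payment descriptions can be prototyped with a simple bag-of-words model; all data and labels below are made up:

```python
# Toy transaction classifier (illustrative assumption, not the production
# system): TF-IDF over payment descriptions plus logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples of transaction descriptions.
descriptions = [
    "BET365 DEPOSIT", "LOTTERY TICKET ONLINE", "RED CROSS DONATION",
    "UNICEF MONTHLY GIFT", "REWE SUPERMARKET", "GROCERY STORE BERLIN",
]
labels = ["betting", "betting", "charity", "charity", "groceries", "groceries"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
clf.fit(descriptions, labels)

# Classify unseen transactions into spending categories.
print(clf.predict(["DONATION TO CARITAS", "POKERSTARS TOP-UP"]))
```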
This contribution addresses the lack of a common knowledge base in analytical projects, which results in communication problems and a lack of structure in often complex analytical projects and threatens their success. In analytical projects, data products are developed that feed into data-driven decisions. This contribution presents the Data Product Profile, which structures communication, ensures a common interdisciplinary knowledge base and thus facilitates successful data-driven decisions in companies.
The fileee C2B platform is the first and only platform developed from the customer's point of view. It allows companies to send documents to their customers digitally and in a legally compliant manner. Integrating AI enables more complex use cases, such as application and service processes or mobile customer portals, with automated document recognition and data classification for uploaded documents, chats or emails. In addition, a customizable building set of 16 technologies is available, including mobile capture, AI-based data extraction, chat and self-service, as well as integration of electronic signatures. The platform uses modern interfaces and a BPM platform that can be used for orchestration.
The capabilities of AI-driven systems are usually compared to human capabilities, whether in the context of autonomous driving, conversation (just look at the Turing test) or other applications. But should we stop there, aiming only to make AI as good as humans? Why not try to build a super-human AI? In the context of some interesting ideas from the books of Max Barry, we will explore interesting research projects and their current state, including practical examples. One topic will be project “Debater”, which addresses the question “Can artificial intelligence expand a human mind?”. The other is about how to bring trust to AI in order to achieve acceptance and foster its adoption. We will look at concepts as well as practical solutions that already exist today, addressing fairness, bias and explainability. We will also look at the limits of AI and where we are today on this journey, in the context of research projects and the new solutions resulting from them.
AI is everywhere, so there is a need for Enterprise AI. This talk sheds light on the necessity of an AI strategy and an operating model, on what differentiates PoCs and pilots from real AI projects running in production, and on how to integrate them into an overall AI strategy. The authors show a best-practice approach, based on real-life projects, covering what AI blueprints can do, why it is necessary to think about industrializing AI projects with digital-factory approaches, and how to address AI workforce challenges.
When it comes to AI, most enterprise executives think of chatbots. How naive. Because the actual boost in productivity takes place in the back office: in the handling of processes. Andreas Klug, President of the working group "Artificial Intelligence" in Bitkom, shows from best practice how AI supports case processing, even for complicated case types.
Critics consider Kubrick’s “2001: A Space Odyssey” from 1968 the most influential science fiction movie ever. Looking back at its release, we can learn some surprising lessons, especially by reflecting on the details that characterize the eminent artificial intelligence called HAL 9000. What role will the personality of a self-conscious chatbot play in the future? What kind of dialogue should it aim for with humans? How much control can we hand over to systems of automated decision-making? We still lack much of what Kubrick envisioned 50 years ago, but we already need to answer some of the questions he asked.
This presentation will share insights into how AI will change the way we deliver healthcare today. Different perspectives, e.g. those of medical doctors, citizens, patients and industry, will be compared. Furthermore, the latest trends and real-world examples will demonstrate the potential of AI in medicine. Join the interactive presentation and contribute your own opinion on AI in medicine.
The fictitious company FastChangeCo has developed the capability not only to manufacture smart devices, but also to extend them as wearables, attaching bio-sensors to clothing and living beings. Each of these devices generates a large amount of (sensitive) data. Based on this data, FastChangeCo aims to make forward-looking decisions and develop innovative services: on the one hand to encourage customers to make targeted purchases, but also to significantly improve the quality of existing products and to develop future products. In order to combine the data into a logical overall view, FastChangeCo has committed itself to a consistent implementation of the information landscape using data modeling methods. In this presentation, the speakers will show how FastChangeCo achieved its goals by rapidly building the required computing capacity in its hybrid cloud data warehouse architecture using existing enterprise technologies.
When people discuss the future of artificial intelligence, a commonly voiced concern is the emergence of an adversarial superintelligence that might spell the end of humankind as we know it. Indeed, we should already think about precautions in order to coexist with AI safely once it will have reached human-level intelligence, and beyond. However, we might have to deal with a different dystopia much earlier than a rogue AI turning humankind into a large puddle of computronium: a swarm of mindless machines will control social status and well-being of billions of individuals by deciding who will find work, a mate, or a friend, or who will receive a loan, or who will go to prison. Consequently, as artificial intelligence becomes more and more a part of everyday life, the safety, benefits, and risks of the technology need to become part of the everyday political and public discourse just as naturally as, for example, road traffic safety.
[Company X] is one of the largest professional service providers in the world, helping a range of clients, from start-ups to Fortune 500 companies, to succeed at each step of their digital transformation. The reason [Company X] is in the best position to advise clients on such topics is that they practice what they preach. This talk presents a collaboration between [Company X] and Inspirient, a Berlin-based start-up building Artificial Intelligence (AI) solutions, to automate a complex process at the core of the professional services business that affects nearly all employees: the travel expense appraisal process. Insights into the challenges and opportunities are shared that will be invaluable to any person or company considering automating their processes with cognitive technology.
While the industry is certainly aiming in this direction, AI in healthcare will not be able to deliver an autonomous “Dr. AI” in the near term. Still, this vision is causing anxiety and ethical questions amongst practitioners and patients alike. CellmatiQ embraces a strategy in which we focus on developing AI-based automation for narrow, targeted tasks in the domain of medical image diagnostics. Combined with a data management strategy that relies on “gold-standard” training data, this allows us to automate human tasks that are tedious, repetitive and/or error-prone, with immediate high benefit for doctors, while avoiding the perceived threat of being replaced. Equally, ethical questions are much easier to solve when focusing on steps of a workflow instead of an entire medical domain. We will present both our data management platform and acquisition strategy, and the first products that provide such automation in fields as diverse as orthodontics and ophthalmology.
Data is said to be at the heart of digital transformation. Still, many organizations fail to create a movement beyond their data science labs and competence centers. Continental successfully empowered hundreds of non-IT users from business functions (controlling, logistics, HR, etc.) to boost their effectiveness in working with data: not as dashboard consumers, but as citizen data analysts developing visual data workflows in the open source KNIME Analytics Platform. The benefits are not only new insights gained, but also the automation of repetitive, data-intensive tasks. From a business user perspective, the term "Big Data" is scoped surprisingly small: it often means "bigger than Excel", yet it leads to better decision making at all levels of the organization.
In addition to new regulatory requirements in terms of legislation, norms and standards, companies are increasingly faced with the challenge of integrating ethical issues into their innovation processes. Here, methods from design research, such as speculative or participative design, offer suitable tools. Using examples from current Fraunhofer CeRRI projects with partners from industry, we demonstrate how design methods and new formats can be used for a dialogical, responsible and ethical design of technology. This way, the responsible design of artificial intelligence can provide a competitive advantage in a digital economy in which trust is key to successful innovation.
Biomedical science and computer technology come together: HPE and DZNE are applying technology to significantly accelerate research into Alzheimer's. By using advanced computing technology and new computational algorithms, we significantly cut down the time required for genome analysis. This benefits real-time analyses of human genomes for research and personalized diagnostics.
In this talk we discuss the practical challenges we faced in the field and what we learned when we used data science in the Fujitsu factory in Augsburg to improve run time and quality. We present our lessons learned and the conceptual frameworks and procedural models derived from them.
Artificial Intelligence is much more than Industry 4.0. Algorithms are part of our everyday life and deeply impact society. They can help us make more consistent and fairer decisions, but they can also control us, increase discrimination and lead to more social inequality. We need a new debate on the relationship between man and machine. This talk aims to initiate that discussion.
Data Science and AI are omnipresent in the media and have a reputation for enabling new business models and promoting a better customer approach. In practice, many companies have gained initial experience with data science units and the application of AI. However, a linear model is often better suited to business management problems than complex self-learning algorithms. The GDPR also presents companies with further challenges in the application of analytical models. The session will focus on existing data science cases and address these challenges.
The landscape of cloud-native technologies constantly grows, leading to great opportunities in the development of big data applications in the cloud. In order to leverage the benefits of these leading-edge technologies, the complexity arising from the heterogeneity of cloud infrastructure and data services has to be managed. Ease of use and accessibility are as important as control and transparency to ensure security and compliance. This talk illustrates characteristics of different cloud platforms like Kubernetes, OpenShift or Cloud Foundry and shows a way to reduce complexity by managing challenges like identity management, governance and cost across the big data pipeline.
Issue: data science projects fail due to incompatible expectations of the team and stakeholders.
Action points:
1. Identify roles and skills in the team.
2. Expectation management: data science is not a magic box.
3. Data strategy: management and the development team need to take an active role during the project, and the strategy needs to be created based on the customer's status and needs.
4. Identify use cases: data science can answer questions, but what are the questions we need to answer during the project?
Practical implementation:
1. Create the right environment: data lakes as the foundation for data science projects.
2. Create the right processes: a. data strategy: understand where the customer is at the moment and what journey they would like to start; b. data science lifecycle: exploration → feature extraction → model training → evaluation/prediction; c. production: testing, monitoring, logging, automation.
3. Tools/technologies: Python, Jupyter, Spark/Hadoop, Hive, Kafka, or in the cloud S3, Kinesis and Athena, plus Kubernetes, and lessons learned.
Facing up to the challenge of digitization in aviation, a micro-enterprise, an SME and a university developed an innovative decision support system. The startup DATA|bility developed data-based diagnosis and prognosis analyses using machine learning in combination with conventional engineering. In cooperation with justaero, a predictive maintenance application for aircraft engines based on a cloud platform has been implemented. The application's output contains a prediction of future health and an indication of the forecast uncertainty. These results are combined with the process management required for maintenance, so that service processes can be controlled, processed and run through automatically and in a traceable manner. With this, needs for action can be identified, recommendations can be given at an early stage and maintenance processes can be optimized. The presentation discusses technical and implementation-related aspects of an AI framework in an industrial environment. Finally, operational cost benefits are highlighted.
This session will cover an active discussion on applied ethics in AI:
• Some insights into how this technology might change our lives for the better or the worse.
• Experiences ranging from the good and the bad to the ugly.
• Algorithms learn from data and from real-world interaction with machines. This mimics human behavior, with all the good and bad decisions being made. What precautions should be built into these systems to avoid bias? How can we encourage our data assets and algorithms to aim for an inclusive design?
• Based on real examples drawn from experience with LinkedIn and Bing search, awareness of bias in data and algorithms will be raised.
• Steps to put a process in place that helps avoid common missteps.
• The road ahead.
AI technology is increasingly establishing itself as an important pillar of digitization in enterprises and public administration. Often, AI is reduced to the training of algorithms, but for enterprise use the entire process of developing an AI application must be supported, including data provisioning, data preparation, training and inference. It is recommended to use a flexible and agile Enterprise AI platform (container-based, K8s) that can run in your own data center, as a hybrid solution or entirely in the cloud. This session shows how to plan and build a platform for AI workloads.
Today’s companies mainly use AI to increase efficiency and to reduce costs. Using AI simply as a cost-cutting tool will eventually fail to capture the full value it offers. Therefore, we present a framework and approach to create entirely new business models and services to capture the value of AI technologies. Based on the systematic and practice-proven methodology of the St. Gallen Business Model Navigator, we have linked new, emerging AI business model patterns with companies' need to exploit existing activities and explore new business opportunities. This approach facilitates in particular (1) the creation of additional value for customers through AI, (2) the structured exploration of business opportunities through use cases, (3) the evaluation and rapid testing of these opportunities, and (4) the capture of value for one's own company through appropriate revenue models. The results of one of our workshops in February will be presented and further discussed.
Soil movements, e.g. caused by natural or anthropogenic processes, can be a threat to the population and infrastructure. Therefore, movement processes have been monitored for a long time. Satellite-based methods now make it possible to detect and monitor ground movements with high precision (radar remote sensing). However, these possibilities have so far only been used to a limited extent for everyday applications.
By using modern GIS technology to handle these big data sets (approx. 50 million measuring points for Germany), the BGR (Federal Institute for Geosciences and Natural Resources) is currently developing the German Soil Movement Service (BBD). For this service, Sentinel-1 data from the Copernicus program will be processed and the results presented using WebGIS technologies against the background of a map of Germany. This will provide a highly effective tool for planning and optimizing measures by other authorities to detect and avert geological hazards.
The article gives an overview of the challenges and technical solutions for processing Big Geospatial Data with GIS technologies and illustrates the added value for safety-relevant tasks using the BGR ground movement service as an example.
At ZF, Big Data & AI moved from an innovation project to an Advanced Analytics Lab, from PoC to SoV (service of value), from first practices to a common learning landscape, from a chapter to a center + community hybrid. We learned amazing things along the way that we are happy to share.
ETL pipelines are a critical component of the data infrastructure of modern enterprises. As big data grows without bounds, one needs to process and integrate much higher volumes of data, coming from more sources and at much greater speed than ever before, and traditional data warehouses and their ETL/DI processes struggle to keep pace in the big data integration context. Building ETL data pipelines for big data processing on Apache Spark has become a viable choice for many, as it not only helps organisations dramatically reduce costs but also facilitates agile, iterative data discovery between legacy systems and big data sources. In this session, we present the feature-rich and flexible ADASTRA Framework for Big Data Integration based on Apache Spark, which enables you to build robust, scalable and reliable data pipelines for your data lakes and big data environments. We will also talk about the benefits of a framework-based approach, gained through valuable experience from successful customer projects.
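For a flavour of what a Spark-based ETL step looks like, here is a generic PySpark sketch (the paths, schema and cleansing rules are placeholders of ours, not part of the ADASTRA Framework):

```python
# Generic PySpark ETL sketch (illustrative only): read raw orders,
# cleanse and enrich them, and write partitioned Parquet to a data lake.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Hypothetical raw landing zone in an object store.
raw = spark.read.option("header", True).csv("s3a://example-bucket/raw/orders/")

cleansed = (
    raw
    .filter(F.col("order_id").isNotNull())                 # drop broken rows
    .withColumn("amount", F.col("amount").cast("double"))  # fix types
    .withColumn("order_date", F.to_date("order_date"))     # normalize dates
    .dropDuplicates(["order_id"])
)

(cleansed
 .withColumn("year", F.year("order_date"))
 .write.mode("overwrite")
 .partitionBy("year")
 .parquet("s3a://example-bucket/curated/orders/"))
```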
At present and in foreseeable future, many critical tasks, such as disease diagnosis, will not be fully automated through artificial intelligence technologies despite their often above-human performance. There are two main reasons for this. First, artificial intelligence is widely considered a black box. Understanding the reasoning behind predictions is difficult for many algorithms and practically not (yet) possible for others. Humans tend to mistrust things they do not understand. Second, intelligent machines are developed to create value and make our lives better. In many cases, machine intelligence supporting humans turns out to be superior to pure automation. In fact, the combination of formalized information internalized by the model and tacit knowledge of the human worker often results in a more complete picture of the situation and leads to a better outcome. However, what does it take to make this human-machine marriage a happy one?
Voice is the most natural form of communication. Speech-based interaction is fast, intuitive and convenient. This is why comdirect is convinced that voice will play an increasingly important role as a customer access channel in banking as well. Using practical examples, the direct bank shows how and where it already uses voice, and what further use cases exist.
Deutsche Bahn, the main German railway provider, is revolutionizing its train stations with IoT & data analytics. Beyond remotely monitoring assets such as elevators, station clocks and displays with smart sensors to ensure high availability, a key requirement for intelligent operations is being aware of passenger counts and flows in the stations. In collaboration with the open source community, DB Station&Service has thus developed the Pax-Counter, a smart sensor that counts the number of mobile devices in its vicinity by detecting their WiFi signals. Pax-Counter sensors have been used to count crowds at events in Finland, France and Switzerland, and have been installed at train stations in Berlin and Hamburg. Advanced analysis & modelling in R makes it possible to determine passenger counts and flows from the number of devices registered. Dashboards and visualizations in Tableau enable data-driven operations for stations, e.g. for passenger guidance, security and cleaning.
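The project's modelling is done in R; purely to illustrate the underlying idea in Python, and with made-up calibration constants, estimating passengers from detected device counts could look like this:

```python
# Toy estimation of passenger counts from Pax-Counter device counts.
# Illustrative assumptions: detection rate and devices-per-person are made up.
import pandas as pd

DETECTION_RATE = 0.6      # assumed share of phones with detectable WiFi
DEVICES_PER_PERSON = 1.2  # assumed average devices carried per passenger

counts = pd.DataFrame({
    "timestamp": pd.date_range("2019-04-10 08:00", periods=4, freq="15min"),
    "devices_seen": [180, 240, 310, 270],
})

# Scale raw device counts up to an estimated passenger count.
counts["passengers_est"] = (
    counts["devices_seen"] / (DETECTION_RATE * DEVICES_PER_PERSON)
).round()

print(counts)
```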
In this talk I will share the story of Zubot, our company chatbot, which is used for marketing purposes. The topics covered will include technical details (e.g. which tools we used for natural language processing), some best practices and pitfalls we encountered during development, and how we used Google Analytics to create a data-driven chatbot that helped shape our future company branding decisions.
During the last decade the number of passengers in Germany’s rail environment has been growing steadily, reaching 144 million passengers in 2017. The increasing demand on capacity requires new methods for the planning and efficient management of trains and infrastructure. This work presents an approach that aims to support future dispatching decisions with Reinforcement Learning. Transferring this approach from today’s computer games to real-world problems remains a core challenge, which is tackled here for a suburban train network in two steps. First, an approach to anomaly detection is presented, covering an initial analysis of present and upcoming anomalies. The second step introduces our current work on modelling a suburban train network as a network of related entities. Based on this approach, different prediction models are used to achieve a data-driven perspective on the current status quo of the network, possible future states and the actions that could lead to these states.
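The abstract does not specify the anomaly detection method; as a hedged sketch of the general idea, a rolling z-score over a delay time series is one simple way to flag disruptions:

```python
# Simple anomaly detection on train delays via rolling z-score
# (an illustrative choice of ours; the talk's actual method is not specified).
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
delays = pd.Series(rng.normal(2.0, 0.5, 200))  # synthetic delays in minutes
delays.iloc[150] = 8.0                         # injected disruption

mean = delays.rolling(30).mean()
std = delays.rolling(30).std()
zscore = (delays - mean) / std

# Flag observations more than three rolling standard deviations from the mean.
anomalies = delays[zscore.abs() > 3]
print(anomalies)  # should surface the injected disruption at index 150
```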
Artificial Intelligence (AI) employed in an economic context will most certainly never operate in isolation, but it will rather interact with people – and especially with its co-workers in the same organisation. To optimise economic value creation, future processes are likely to be structured in such a way that both sides, AI and humans, leverage their respective strengths while compensating for the other side’s weaknesses. Due to the systemic weaknesses of current approaches to AI (e.g., lack of intuition and common sense), new roles and jobs are likely to emerge at the workplace. In this talk, we extrapolate from current project experiences to discuss five potential new roles and supporting organisational structures that may emerge over the next ten years.
Ethics is the new blockchain. When it comes to AI, automated decision-making processes and digitisation in general, ethics committees, commissions and working groups abound. There's a lot of pressure on companies and the public sector to behave ethically. Still, most people don't have a clue what that means for the development and deployment of AI and automated systems. That's no surprise, because many "experts" give fuzzy and even contradictory advice. In my talk, I will outline the state of the debate, make sense of the issue, provide best practice examples of how to deal with the challenges and offer some tangible advice on how to move forward.
Digitalization promises to improve plant operation in the petrochemical industry by increasing insight into production processes and raising the degree of automation. A growing sensor base, combined with analytics and decision-support tools, should increase the safety and efficiency of plant operation. The talent shortage in the chemical industry is a limiting factor in tapping the potential of digitalization; in the worst case, a shrinking workforce struggles with a growing number of overwhelmingly complex IT tools. A possible way forward is to offer new types of interfaces that simplify data access and interpretation and prepare for a shrinking workforce that can no longer access the knowledge of retired experts. We introduce the Intelligent Knowledge Assistant (IKA), an industrial virtual assistant acting as an intelligent companion that makes operations easy even for less-skilled workers. IKA simplifies information access and interpretation to support the expertise and skill development of the workforce.
In this session, we will sketch a new edge computing scheme for dealing with exabytes of measurement data collected in the most remote and inhospitable areas of the world. The proposed scheme builds on two pillars. First, it consists of a so-called “meta-cluster” architecture in which secure micro-data-centers are federated via a ubiquitous cloud backbone. Second, our scheme employs a dedicated, patent-pending signal processing software stack which is able to move transformational specifications from the expert’s desk to globally distributed computation and storage devices, as well as to re-integrate the measurement excerpts for interactive discovery. The presented edge computing scheme for measurement data has been successfully implemented by the ambitious Daimler project “Big Data for Endurance Testing”, a future-proof analysis and reporting platform for endurance testing with 23 sites around the world.
Vehicle sensors are able to see the environment, but in some city scenarios the car has to look around corners or behind obstacles to plan a collision-free trajectory. This can be solved with Car2X connectivity. To predict whether there is enough signal to communicate, we developed a prediction model from real-world Car2X data. In this talk you will get insights into the whole flow, from data collection in the cars to a production-ready deep learning prediction model served as an API. This project is funded by the BMVI mFUND OpenData program.
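A minimal sketch of serving such a model as an API (an illustration under our own assumptions; framework, route and feature names are not the project's actual service):

```python
# Illustrative model-serving sketch (not the project's actual API):
# expose a signal-quality predictor over HTTP with FastAPI.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Position(BaseModel):
    lat: float
    lon: float
    speed_kmh: float


def predict_signal_quality(p: Position) -> float:
    """Stand-in for the trained deep learning model; returns a score in [0, 1]."""
    # In a real system, a loaded model would be invoked here.
    return 0.5


@app.post("/predict")
def predict(position: Position) -> dict:
    score = predict_signal_quality(position)
    # Threshold is a made-up example value, not a project parameter.
    return {"signal_quality": score, "sufficient_for_car2x": score > 0.7}
```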
The session will cover solving a real-life problem of simplifying motor insurance claims, thereby improving the customer experience, reducing the processing time to settle claims, and reducing the cost of claims. The solution leverages conversational AI, integrating Google Dialogflow for interactions with the end consumer, and implements customized computer vision algorithms to process images of damaged cars and arrive at an estimate for faster settlement of claims. The AI solution involves the right algorithm, the right training data, feedback to learn from, and enterprise change management to embed AI interventions into existing processes. Enterprise readiness and change management are critical for success. The session will address the technical aspects as well as the change management elements of the project, along with the choices made to overcome the challenges. The solution delivers a phenomenal customer experience by reducing the processing time from 7 days to 1 day, and reduces claims cost by 50%.
The ZF plant in Saarbrücken, Germany, manufactures around 11,000 transmissions per day. With 17 basic transmission types in 700 variants, the plant manages a large number of variants, and every transmission consists of up to 600 parts. The plant is a forerunner and lead plant for innovative Industry 4.0 technologies. An AI project was therefore started with the objective of getting reliable and fast results in root cause discovery. Speed is important because production runs 24 hours a day, 7 days a week; the target is to reduce waste in certain manufacturing domains by 20%. The key success factor is the fast detection mechanism within the production chain delivered by AI: complex root-cause findings can be reduced from several days to hours. The self-learning AI solution Predictive Intelligence from IS Predict was used to analyze complex data masses from production, assembly and quality and to find reliable data patterns, providing transparency on disturbing factors and factor combinations.
IAV Maskin takes completely new directions in anonymisation. Using artificial intelligence methods, key features such as line of sight and facial expressions are identified in the original data. In the next step, the originally recorded face is replaced by a synthetic face with the same gaze direction and facial expressions as the original. The new face can no longer be recognized as the original one, while the value of the data is completely preserved, just as with natural data. Without IAV Maskin, a dramatic loss of information and serious image recognition errors are inevitable. The development of camera-based systems for driver assistance or infrastructure observation is no longer possible without IAV Maskin, at least not without violating personal rights.
Ulrich Pöttgens, Director Digital at Commerzbank, and Alexander Siebert, CEO of Retresco, speak about the advanced technology adoptions Commerzbank has embraced, with the help of Retresco, with the objective of managing digital transformation enterprise-wide. Commerzbank needed a solution that gave every employee easy and direct access to comprehensive regulatory knowledge. Streamlining processes by implementing a chatbot platform resulted in a more even spread of knowledge within the entire organisation; employee engagement has improved, leading to greater efficiency. Having implemented a system that learns from and reacts to specific feedback from staff, Commerzbank is now able to offer its employees significantly faster, solution-oriented support. Managing this supportive platform required further competencies, creating new professions.
Using Transport for London’s open data platform, four PhD scientists set out to address a key challenge facing cities today: how to tackle traffic congestion, maximise public transportation efficiency and minimise air pollution. Our team chose to focus on the open data records of approximately 27 million bicycle journeys made over a four-year period to try to solve a part of this dynamic problem. Over a five-week period, our team of data scientists used Python and an SQL database to perform initial queries and explorations of the data. The findings were astounding and led to the development of a predictive flow model using ANNs with Google’s TensorFlow machine learning library. By using the data of each bike journey, the team effectively came up with cost reduction strategies for the bike-sharing platform. We will share the process and discuss the practical implications of such an undertaking for bike-sharing platforms across all cities.
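A hedged sketch of such a predictive flow model (synthetic data and made-up features of ours, not the team's actual model): a small Keras ANN maps time-of-day and day-of-week features to expected journey counts.

```python
# Toy bike-flow predictor (illustrative only): a small Keras ANN mapping
# hour-of-day/weekday features to journeys per hour, on synthetic data.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
n = 2000
hour = rng.integers(0, 24, n)
weekday = rng.integers(0, 7, n)

# Encode the hour cyclically and flag working days.
X = np.stack([np.sin(2 * np.pi * hour / 24),
              np.cos(2 * np.pi * hour / 24),
              (weekday < 5).astype(float)], axis=1)
# Synthetic target: a weekday morning commuter peak plus noise.
y = 50 + 30 * X[:, 2] * np.exp(-((hour - 8.0) ** 2) / 8.0) + rng.normal(0, 5, n)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

print(model.predict(X[:3], verbose=0))  # predicted journeys per hour
```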