First and Former Director General, Israel National Cyber Directorate; Co-Head of Israel's AI Initiative; Head of Security Studies Program, School of Political Science, Government and International Affairs, Tel Aviv University
Head of UK Office for Artificial Intelligence
Ng Chee Khern
Permanent Secretary, Smart Nation and Digital Government, Prime Minister’s Office, Singapore
President, Israel Internet Association (ISOC-IL); Associate Professor of Information Science, Lauder School of Government and Ofer School of Communications, IDC, Israel; Affiliated Associate Professor, Information School, University of Washington (UW)
Since 2017, over 30 countries have published national strategies or national plans in the field of artificial intelligence, followed by billions of dollars in investments.
Artificial Intelligence technologies already hold tremendous economic potential in the private and business sectors. Gartner and McKinsey estimate the value of the global AI market in 2019 at USD 1.9 trillion and forecast USD 3.9 trillion for 2022. Over the next five years, the market value is estimated to grow by USD 600-800 billion annually. There is reason to believe growth will be even faster in the post-Corona era.
Two technological revolutions – the information and communication revolution and the cyber revolution – have shaped the development of the modern economy and made the high-tech industry a main growth engine. Now it is AI's turn: those who implement AI technologies will have a significant advantage in the post-Corona era.
Understanding the Convergence of Deep Reinforcement Learning
In the last five years, Reinforcement Learning (RL) has had some remarkable successes in board games such as Go and in computer games such as Atari and StarCraft. However, we currently do not see many real-world applications of RL. I will consider the practical potential of deep RL and the issues that have to be addressed in order to make deep RL a common methodology. In this talk I will focus on learning on the edge, and point to some of the issues that need to be resolved, such as fast convergence, confidence in the models, and robustness to changing environments and to the behaviour of other agents. I will explain what is known in terms of convergence speed, and what is needed in terms of abstraction and pre-training.
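As a concrete point of reference for the convergence issues mentioned above, here is a minimal tabular Q-learning loop (an illustrative sketch with an assumed toy environment and hyperparameters, not material from the talk):

```python
import numpy as np

# Minimal tabular Q-learning on a toy 10-state chain (illustrative only):
# action 1 moves right, action 0 moves left, reward 1 at the rightmost state.
n_states, n_actions = 10, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == n_states - 1), s2 == n_states - 1

for episode in range(500):
    s = 0
    for t in range(200):  # cap episode length
        # epsilon-greedy, with random tie-breaking while Q is uninformative
        if rng.random() < eps or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # TD(0) update; convergence guarantees require sufficient exploration
        # and appropriately decaying learning rates
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a])
        s = s2
        if done:
            break

print(Q.round(2))  # the greedy policy should point right along the chain
```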
Factor Graph Attention and its Application to Visual Dialog and Visual Navigation
With the growing scale of data and computational power, machine learning algorithms are able to perform cognitive-like tasks such as visual question answering and visual navigation. In this talk we present Factor Graph Attention models that learn high-order correlations between various data modalities. These models can be seamlessly applied to any data utilities and differentiate useful signals from distracting ones. They won the Visual Dialog Challenge 2020 and currently outperform the state of the art in target-driven visual navigation on the AI2-THOR simulator.
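For readers unfamiliar with attention, the following generic scaled dot-product attention sketch shows the basic building block of weighting useful signals over distracting ones (a simplification; it is not the factor-graph formulation itself, and all shapes are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Generic attention: weight value vectors V by the similarity
    between queries Q and keys K. Shapes: Q (m, d), K (n, d), V (n, dv)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # (m, n) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # (m, dv) attended output

rng = np.random.default_rng(0)
# e.g., attend from 4 dialog-question tokens to 16 image regions
out = scaled_dot_product_attention(rng.normal(size=(4, 32)),
                                   rng.normal(size=(16, 32)),
                                   rng.normal(size=(16, 64)))
print(out.shape)  # (4, 64)
```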
Computational Separation Between Convolutional and Fully-Connected Networks
Why do Convolutional Neural Networks (CNNs) perform so much better than Fully Connected Networks (FCNs) in some domains? One common explanation is sample complexity: CNNs use fewer parameters, hence they are potentially more sample-efficient. However, this explanation does not hold water, as over-parameterized neural networks often outperform more economical models. This talk will offer another explanation: CNNs are friendlier to gradient-based training. Following this, we discuss a general approach for analyzing the interplay between training complexity, network architecture, and data distribution for gradient-based learning.
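For concreteness, the sample-complexity argument rests on parameter counts like the following (an illustrative calculation with arbitrary sizes, not taken from the talk):

```python
# Parameter count: conv layer vs. fully connected layer on a 32x32x3 input.
in_h, in_w, in_c = 32, 32, 3
out_c, k = 16, 3  # 16 output channels, 3x3 kernels

# Convolution shares weights across spatial positions (+1 bias per channel)
conv_params = out_c * (k * k * in_c + 1)
# An FC layer producing the same 32*32*16 activations connects every
# input unit to every output unit (+ biases):
fc_params = (in_h * in_w * in_c) * (in_h * in_w * out_c) + in_h * in_w * out_c

print(f"conv: {conv_params:,} parameters")  # 448
print(f"fc:   {fc_params:,} parameters")    # ~50 million
```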
Diamond Keynote: Transforming Critical Work @ Intel with AI
For the last 10 years, the 200 data scientists and machine learning engineers of Intel's Artificial Intelligence Group have created hundreds of AI solutions to transform Intel's critical work, including product development and features, manufacturing, validation, supply chain, and sales.
In this talk, Itay Yogev, General Manager of Intel IT's Artificial Intelligence Group, will describe the journey and the key learnings of what it takes to create breakthrough AI technology that brings high business impact.
Keynote: The Challenges of AI in Defense
In this talk I will discuss the challenges of practicing AI in a large defense company. I will discuss methods of setting up a team of AI experts that specializes in a large number of different application areas, and give several examples of how we use deep learning to enhance our defense applications, particularly in the fields of RF and communication signals.
I will conclude by discussing the challenges of applying AI to critical defense applications, and how to overcome them.
Keynote: End-to-end generic segmentation
Many real-world applications suffer from a lack of ground truth. We propose an innovative end-to-end segmentation network that handles zero-shot and few-shot segmentation. We will show an innovative flow and the visual intuition that make triplet-loss post-processing redundant and enable end-to-end networks for many applications. In addition, our network has the advantage of handling noisy labels, by letting the network optimize accuracy without compromising consistency.
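For context, here is a minimal version of the triplet loss that the proposed flow makes redundant as a post-processing step (an illustrative sketch; the margin value and embedding sizes are assumptions):

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull the anchor embedding toward the positive and push it away
    from the negative by at least `margin` (illustrative sketch)."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# usage: embeddings of pixels/regions from the same vs. different segments
a, p, n = torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 64)
print(triplet_loss(a, p, n))
```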
Surround Depth Perception for Autonomous Vehicles without the Lasers
Image-based scene understanding in autonomous driving applications is typically performed directly in image space, making both the detection of general objects and their accurate 3D localization challenging. Recent developments in deep learning based stereo reconstruction offer an alternative path to tackling these challenges. I will present a surround multi-camera stereo system, recently developed at Mobileye, designed to offer a new sensing mode of the vehicle's surroundings. This mode is algorithmically independent of our classic vision pipeline and sensor-independent of our LiDAR-based detection. I will mention some design considerations regarding our datasets, losses, and architecture, particularly with regard to performing depth reconstruction using multiple overlapping fields of view with an arbitrary camera setup.
Artificial Intelligence in Finance: Scope, Examples, and Impact
AI enables principled representation of knowledge, complex strategy optimization, learning from data, and support for human decision-making. I will present examples and discuss the scope of AI in our research in the finance domain.
Inventing on behalf of Customers
We will illustrate the focus on the customer, and the value of inventing on behalf of customers, through two examples: the Just Walk Out technology behind Amazon Go, a store where you shop and never have to wait in line; and a peek under the hood of Amazon One, a fast, convenient, contactless way for people to use their palm for everyday activities such as paying at a store, presenting a loyalty card, entering a location like a stadium, or badging into work.
Privacy Preserving Machine Learning
The growing adoption of machine learning in many areas, especially healthcare, has led to its deployment in production systems. The large datasets these systems rely on may contain private and sensitive information, which poses serious privacy and security concerns: challenges arise whenever a machine learning algorithm needs to access private data. To address these issues, several privacy-preserving deep learning techniques have been adopted, including Secure Multi-Party Computation (SMPC) and Homomorphic Encryption (HE). There are also methods that modify the model so that it can be used in a privacy-preserving environment. However, the various techniques trade off privacy against performance. In addition, private data can be misused or leaked through various cyber threats, for example when an attacker infers properties of the data used for training, or recovers the underlying model architecture and parameters. In this session we will review the privacy concerns raised by machine learning, and the mitigation techniques introduced to tackle these issues.
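As a flavor of how SMPC protects data in use, here is a minimal sketch of additive secret sharing, one classic SMPC building block (illustrative only; the modulus is a toy choice, not a production parameter):

```python
import secrets

P = 2**61 - 1  # toy prime modulus; real systems use vetted parameters

def share(x, n_parties=3):
    """Split x into n additive shares that sum to x mod P.
    No proper subset of the shares reveals anything about x."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Each party can add its shares of x and y locally; the sums
# reconstruct to x + y without anyone seeing x or y in the clear.
xs, ys = share(42), share(100)
zs = [(a + b) % P for a, b in zip(xs, ys)]
print(reconstruct(zs))  # 142
```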
The State of Computer Vision: Achievements and Limitations
The talk will discuss recent trends in computer vision, with a personal view of their successes, limitations and future directions.
Better Disentanglement for Image Translation
Learning to separate data into its meaningful attributes is a core task in machine learning. The performance of current methods leaves much to be desired, mainly due to their reliance on generative adversarial networks (GANs). In this work, we consider the setting in which some attributes are observed and our task is to infer the unobserved ones. We present a new non-adversarial approach, LORD, that is highly effective for this task. Strikingly, we discover that latent optimization provides an extremely potent inductive bias, and we show that GANs are not necessary for disentanglement but rather contribute to generating sharp images. Finally, we demonstrate that our method outperforms the top unsupervised image translation methods.
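To illustrate what latent optimization means in this context: each training sample is given its own learnable latent code, optimized jointly with a decoder under a plain reconstruction loss, with no encoder and no adversary (a hedged toy sketch of the general idea, not the LORD implementation):

```python
import torch
import torch.nn as nn

# Toy latent optimization: per-sample codes are free parameters
# optimized jointly with a shared decoder (no encoder, no GAN).
n_samples, content_dim, img_dim = 100, 8, 784
images = torch.rand(n_samples, img_dim)            # stand-in dataset
content = nn.Parameter(torch.randn(n_samples, content_dim) * 0.01)
decoder = nn.Sequential(nn.Linear(content_dim, 256), nn.ReLU(),
                        nn.Linear(256, img_dim), nn.Sigmoid())

opt = torch.optim.Adam([content, *decoder.parameters()], lr=1e-3)
for step in range(200):
    recon = decoder(content)
    loss = ((recon - images) ** 2).mean()          # plain reconstruction loss
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```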
Towards a Theory of Deep Learning
Very recently, the square loss has been observed to perform well in classification tasks with deep networks. However, a theoretical justification is lacking, unlike in the cross-entropy case, for which an asymptotic analysis is available. I will sketch several observations on the dynamics of gradient flow under the square loss in ReLU networks. In particular, I will show how convergence to a local minimum-norm solution is expected when normalization techniques such as Batch Normalization (BN) or Weight Normalization (WN) are used. The main property of the minimizer that bounds its expected error is its norm: among all the interpolating solutions, the ones associated with smaller Frobenius norms of the weight matrices have better margin and better bounds on the expected classification error.
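In symbols, the minimum-norm bias described above can be summarized schematically as follows (our rendering, where f_W denotes the network with weight matrices W_1, ..., W_L; the talk's exact statement may differ):

```latex
% Gradient flow with BN/WN under the square loss is biased toward
% interpolating solutions of minimal norm (schematic):
\min_{W_1,\dots,W_L}\; \sum_{k=1}^{L}\|W_k\|_F^2
\qquad \text{s.t.}\quad y_i\, f_W(x_i) = 1 \quad \forall i .
% Among interpolating solutions, smaller Frobenius norms correspond to a
% larger margin for the normalized network, and hence to better bounds
% on the expected classification error.
```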
Illuminating the Dark Space of Health Care with Ambient Intelligence
Many deaths occur in hospitals and clinics due to preventable medical errors. In this talk, Dr. Fei-Fei Li will share her recent research into how electronic sensors and artificial intelligence can help medical professionals monitor and treat vulnerable patients in ways that improve outcomes while respecting privacy.
Reasoning about Perception
The talk will present new methods for understanding and generating visual scenes with novel combinations and complex interactions.
Non-Ethical AI and Challenges to Democracies
This talk discusses the unique challenges of AI techniques in a societal context: for example, the privacy challenges of AI, and how societal AI often radicalizes interpretations of social dynamics by reinforcing, and sometimes generating, inequalities in society. It addresses three questions: Can AI be ethical? What would count as sufficiently ethical AI? And what are the challenges of non-ethical AI to democratic regimes? The talk reports on the work of the subcommittee on AI Ethics & Regulation of the Israeli National Intelligent Systems Project, and on the instrument it developed to help decision makers assess ethical challenges as they develop and maintain an AI system, and act accordingly to make their system more ethical.
International Law Perspectives on AI Ethics: Towards an Adaptive International Legal Ecosystem
The presentation will focus on challenges for the international community in its attempts to regulate AI ethics. It will provide some suggestions for a more adaptive international law framework to address these challenges.
Ethical Challenges of AI
Artificial intelligence (AI) is not just a new technology that requires regulation. It is a powerful force that is reshaping daily practices, personal and professional interactions, and environments. For the well-being of humanity, it is crucial that this power is used as a force of good. Ethics plays a key role in this process by ensuring that regulations of AI harness its potential while mitigating its risks. In this talk, I will discuss the ethical risks and opportunities that AI brings about and focus on how digital ethics – the branch of ethics that studies and evaluates moral problems related to data, algorithms, and corresponding practices and infrastructures – can contribute to shape the governance of AI to mitigate the risks and harness its potential for good.
Hardware challenges of deploying AI at ultra scale
The second wave of the AI revolution is emerging today. The first wave was one of technology proof of concept, in which AI concepts morphed from toy applications and models into real, complex, barely trainable AI pipelines that can create usable AI applications. The second wave is the production phase, focused mainly on inference, which needs to handle the tsunami of real-life AI usage across a vast variety of fields (scaling by roughly 10x every year). Success in the production phase of AI requires integrating AI's special compute characteristics with practical, scalable infrastructure. The lecture will go through the compute of the AI pipeline and its deployment in different types of use cases, the challenges of computing with large models over scalable infrastructure, and the limitations of current compute infrastructure in scalable deployment.
Running deep learning applications on the edge - challenges and possibilities
In this talk I will briefly highlight some of the challenges we face when running deep learning applications on edge devices. I will then discuss the implications for the compute device that executes those applications, and the possibilities opened up by new devices entering this field. One such case is the Hailo-8 solution, which will be explored further.
Building the Next-gen Data Centers for AI Computing
AI technologies are growing rapidly in the industry, transforming the way businesses are created and operated. The computational requirements for AI grow at a dazzling rate, putting tremendous pressure and load on the data center. In this talk, we will discuss the new infrastructure requirements introduced by AI workloads and what it takes to address them.
Keynote: Short Text in the Wild
“Thanks for all the fish”; “Happy bday grandma!”; “Mercedes C-class Cabriolet”. Looks random, right? Well, maybe you know the old saying “one man’s trash is another woman’s treasure”. These texts, while very short, can be a virtual gold mine for many different business use-cases, some of which we tackle daily in our work. When we started working on unsupervised feature generation from very short texts, we began by looking into what had already been done in the field, and to our surprise the answer was: not a lot. In this talk we’ll share some insights from our experience in dealing with short texts. We’ll start by defining what we mean by "short" in our unique case, why it’s interesting in various domains, where and why advanced out-of-the-box methods failed, and finally, our tips for similar use-cases.
Keynote: How to train the largest NLP models on Nvidia Ampere GPUs
Deep learning models have gained widespread popularity for natural language processing (NLP) and have undergone a revolution in performance and capabilities. This Nvidia talk will cover the latest tools that give researchers access to optimized software for training cutting-edge models like BERT and GPT at large scale on the A100, the most powerful GPU architecture.
Actionable insights from the content of websites: New methods implemented at Intel
For sales and marketing organizations within large enterprises, identifying and understanding new markets, customers, and partners is a key challenge. Intel's Sales and Marketing Group (SMG) faces these challenges while growing into new markets and domains and evolving its existing business. In today's complex technological and commercial landscape, there is a need for intelligent automation that supports a fine-grained understanding of businesses, to help SMG sift through millions of companies across many geographies and languages and identify relevant directions. We present a system developed in our company that mines millions of public business web pages and extracts a faceted customer representation. We focus on two key customer aspects that are essential for finding relevant opportunities: industry segments (ranging from broad verticals such as healthcare to more specific fields such as 'video analytics') and functional roles (e.g., 'manufacturer' or 'retail'). To address the challenge of labeled data collection, we enrich our data with external information gleaned from Wikipedia, and develop a semi-supervised, multi-label deep learning model that parses customer website texts and classifies them into their respective facets. Our system scans and indexes companies as part of a large-scale knowledge graph that currently holds tens of millions of connected entities, with thousands being fetched, enriched, and connected to the graph every hour in real time, and it also supports knowledge and insight discovery. In experiments conducted in our company, we are able to significantly boost the performance of sales personnel in the task of discovering new customers and commercial partnership opportunities.
Few-Shot Question Answering by Pretraining Span Selection
The latest methods in NLP involve pretraining a model on unlabeled text and then fine-tuning it on annotated input-output examples from the target task. While plain text is abundant, labeled examples must be collected by humans in a time-consuming and expensive process, which is often impractical to scale up via crowdsourcing due to the language (e.g. Hebrew) or domain (e.g. medicine). How can we train reasonable models with only a few annotated examples? In this talk, I present some first steps (and encouraging results!) towards training question answering models in a few-shot setting.
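To give a feel for span-selection pretraining, the toy sketch below masks one occurrence of a recurring span and keeps another occurrence as the answer the model must learn to select (an illustrative simplification; the actual pretraining scheme, span definition, and special tokens may differ):

```python
import re

def make_span_selection_example(text, mask_token="[QUESTION]"):
    """Find a word that recurs in the text, mask its first occurrence,
    and return (masked_text, answer). Toy sketch: real schemes select
    multi-word recurring spans and train a model to point back to them."""
    words = re.findall(r"\w+", text.lower())
    recurring = [w for w in set(words) if words.count(w) > 1 and len(w) > 3]
    if not recurring:
        return None
    span = recurring[0]
    masked = re.sub(rf"\b{span}\b", mask_token, text, count=1,
                    flags=re.IGNORECASE)
    return masked, span

example = make_span_selection_example(
    "The committee approved the budget. The committee will meet again in May.")
print(example)
# ('The [QUESTION] approved the budget. The committee will ...', 'committee')
```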
Compositional Generalization for Natural Language Understanding
While models for natural language understanding have exhibited impressive performance in recent years, they have recently been shown to struggle in handling compositions that were not observed at training time. In this talk, I will describe recent work on developing models with an explicit inductive bias for compositionality, which leads to dramatically stronger compositional generalization.
Keynote: Fueling the Clinical AI Vehicle with Data
Real-world clinical data, coupled with advances in machine learning, is transforming clinical medicine, assisting in reaching more accurate diagnoses and optimizing personalized treatment decisions. Recently, Anthem released one of the largest certified de-identified healthcare datasets, comprising data on more than 45 million individuals over 12 years, available for use by researchers. This centralized environment is accessible for crowdsourcing new insights into healthcare's greatest challenges, such as improving health outcomes, providing the best experience for patients, and reducing the cost of care. With this resource, in the past year Anthem Israel has led numerous AI-driven projects in digital medicine. In this session, we will present a sample of the work we are conducting with our academic partners, including a risk stratification platform to accelerate findings, a platform comparing the efficacy of treatments available to cancer patients, AI-based approaches to effective antibiotic treatment and to using laboratory tests for improved diagnostics, and a study of the heterogeneous physiological response to SARS-CoV-2 infection. Join the session to learn more about how we are making this resource available to researchers to promote new insights into healthcare's greatest challenges, and how we are leveraging them to design the next generation of clinical medicine interventions.
Artificial Intelligence in Computational Hematopathology
I will present how we use AI to improve the diagnosis and understanding of severe blood diseases such as acute myeloid leukemia and the myelodysplastic syndromes.
On Weight Clustering in Deep Learning
We consider several learning settings where learning dynamics tends to result in clusters of neuron weight vectors. We provide formal conditions under which this happens, and discuss implications on generalization.
Implicit Regularization in Deep Learning: Lessons Learned from Matrix and Tensor Factorization
The mysterious ability of deep neural networks to generalize is believed to stem from an implicit regularization, a tendency of gradient-based optimization to fit training data with predictors of low “complexity.” A major challenge in formalizing this intuition is that we lack measures of complexity that are both quantitative and capture the essence of data which admits generalization (images, audio, text, etc.). With an eye towards this challenge, I will present recent analyses of implicit regularization in matrix factorization, equivalent to linear neural networks, and tensor factorization, equivalent to a certain type of non-linear neural networks. Through dynamical characterizations, I will establish implicit regularization towards low rank, different from any type of norm minimization, in contrast to prior beliefs. Then, motivated by tensor rank capturing implicit regularization of non-linear neural networks, I will suggest it as a measure of complexity, and show that it stays extremely low when fitting standard datasets. This gives rise to the possibility of tensor rank explaining both implicit regularization of neural networks, and the properties of real-world data translating it to generalization.
The works covered in this talk were done in collaboration with Sanjeev Arora, Wei Hu, Yuping Luo, Asaf Maman and Noam Razin.
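As a small numerical illustration of the low-rank bias discussed above (our sketch, not the speaker's code): fitting observed entries of a low-rank matrix with a depth-2 factorization trained by gradient descent from small initialization tends to keep the effective rank low, with no explicit norm or rank penalty:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 2
# Rank-2 ground-truth matrix, rescaled to keep gradient descent stable
target = (rng.normal(size=(n, r)) @ rng.normal(size=(r, n))) / n
mask = rng.random((n, n)) < 0.5          # observe roughly half the entries

# Depth-2 factorization W2 @ W1 with small initialization
# (small init is important for the implicit low-rank effect)
W1 = 1e-3 * rng.normal(size=(n, n))
W2 = 1e-3 * rng.normal(size=(n, n))
lr = 0.05
for step in range(5000):
    G = (W2 @ W1 - target) * mask        # gradient of the masked L2 loss
    W1, W2 = W1 - lr * (W2.T @ G), W2 - lr * (G @ W1.T)

s = np.linalg.svd(W2 @ W1, compute_uv=False)
print(s[:5].round(3))  # typically ~2 dominant singular values, rest near zero
```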
XAI via Neural Entropy Estimation
Estimating the information diffusion in complex networks of variables and time series is a fundamental problem in information theory with many applications. We propose a scheme that uses the generalization abilities of cross-entropy in deep neural networks (DNNs) to improve entropy estimation and related information-theoretic measures. The talk will include numerical examples (joint work with Yuval Shalev and Amichai Painsky).
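The identity underlying cross-entropy based entropy estimation can be seen in a toy categorical setting, where the held-out cross-entropy E[-log q(X)] = H(p) + KL(p||q) upper-bounds the true entropy in expectation (our illustrative sketch; the talk's estimator uses DNNs on networks of variables and time series rather than this plug-in model):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.25, 0.125, 0.125])   # true source distribution
samples = rng.choice(4, size=20000, p=p)
train, test = samples[:10000], samples[10000:]

true_entropy = -(p * np.log2(p)).sum()     # 1.75 bits

# Fit a model q by maximum likelihood (empirical frequencies minimize the
# training cross-entropy for a categorical model), then evaluate the
# cross-entropy on held-out data: it approaches H(p) as q approaches p.
q = np.bincount(train, minlength=4) / len(train)
cross_entropy = -np.mean(np.log2(q[test]))

print(round(true_entropy, 3), round(cross_entropy, 3))
```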
AI for Social Good: Opportunities with AI
We will discuss emerging opportunities arising from the advancements in AI, how AI can help address societal problems, and opportunities for multidisciplinary AI research.
Deep Learning for Investigating Archaeological Artefacts: the Levantine Ceramics Project
The Levantine Ceramics Project (LCP, https://www.levantineceramics.org/), initiated by Prof. Andrea Berlin of Boston University, is an open-access, interactive website that facilitates sharing archaeological ceramics information to a global community of researchers. With over 75,000 page views a year, the LCP is a growing data source dedicated to gathering thousands of high-quality photomicrographs of petrographic thin sections.
Ceramic petrography is a well-established analytical method, based on optical identification of minerals under a polarizing light microscope, to determine the geological provenance and production techniques of clay artefacts. Petrography enables detailed reconstructions of ancient economies and their connections across vast geographical regions. However, interpretation of the data is time-consuming, expensive, and highly subjective, and always depends on the petrographer's experience and expertise.
Photomicrographs on the LCP can be used to train neural networks to study mineralogy. Applying AI can lead to accurate, objective insights about the production, style, and movement of pottery, and make them available to the entire archaeological community cheaply and quickly.
Evolution and Standardization of Scientific Knowledge in the Early Modern Period: A Machine Learning Based Enquiry
The presentation offers an overview of our ongoing research. We will present the approach and digital tools used for analyzing a very large corpus of digitized historical sources, namely 360 early printed books, in order to create a large dataset of “knowledge atoms” comprising full texts, sections of books, annotations, paratexts, indexes and tables of contents, diagrams, scientific illustrations and computational tables, information about printers and publishers, etc. Our inquiry focuses on a comprehensive understanding of the evolution of the textbooks used for teaching cosmology and astronomy in European universities from the end of the 15th century to 1650. The research aim is to reconstruct the evolution of scientific knowledge in the frame of cosmology. Data are collected and systematized through a semantic database and finally analyzed by means of an approach dictated by complex systems theories. Moreover, machine learning technology is employed for the extraction and classification of the data.
Material Study of Dead Sea Scrolls using Machine Learning Methods
The Dead Sea Scrolls are a corpus of approximately 900 ancient scrolls, found in caves in the Judean Desert in a highly fragmentary state, with the number of fragments reaching tens of thousands in various sizes. The corpus has been studied intensively for 70 years, with important results for a formative period in the history of Judaism and Christianity. Even the smallest detail may entail a far-ranging historical conclusion. A recent digitization project by the Israel Antiquities Authority provides high-resolution multispectral images of the entire corpus, with unprecedented ability to track minor material details. It is increasingly evident that these details can take the shape of data and be fed into advanced algorithms to enhance their analysis and yield a better grip on history. Initial steps in this direction were carried out in a project with TAU researchers Profs. Nachum Dershowitz and Lior Wolf.
I shall present one pioneering study that is already pending publication. This study applies machine learning to the assembly of fragments of ancient papyrus. The first step has proved successful and can ultimately, with further development, be applied to numerous other collections of ancient papyri in many other languages. In addition, I will present several ideas that have so far been only partly pursued: matching fragments across their rich imaging history, and establishing characteristic shapes of letters in various types of ancient handwriting. In this presentation I will not touch on the analysis of the text of the scrolls using NLP methods, which is a promising field in and of itself.
For details, see https://www.qumranica.org
Reconstructing Nazi Racial Classification: A Mass Data Acquisition Challenge
The Nazi state relied heavily on anthropometric measurements to determine racial affiliation. In particular, a select group of racial experts from the SS Race and Settlement Main Office (RuSHA) developed methods for diagnosing the different racial components of individuals according to certain physical markers. Despite the extensive literature on Nazi racial policies, we still do not know how the SS racial experts practically determined one's racial affiliation. The proposed project aims to explore this theme by scanning and analyzing hundreds of thousands of 'racial cards' used by SS experts, and then retrieving the data from these cards, which, due to their vast numbers, cannot be done manually. The first goal is therefore to develop specifically customized digital technology (i.e., software) to extract data from these archival sources. Once retrieved, these data will help us uncover the various components that determined the process of racial selection, including entrenched racial perceptions, social stigmas, ideological biases, and utilitarian considerations, as well as more mundane circumstances related to the timing of the examination (e.g., the time of day), bureaucratization, routinization, and the changing circumstances of the war. Our findings will open a window onto the intricate links between measurement, perception, social and ideological biases, and sorting methods, as these interacted during a most turbulent and fateful period in European history. The study is also expected to offer pertinent lessons for the present era, when governments and corporations throughout the world promote the use of facial recognition software, AI, and racial profiling for sorting individuals.
Keynote: Reinforcement Learning
In this talk we will give a very brief overview of the field of Reinforcement Learning. We will highlight the recent successes and look at the fundamental methodologies.