Optimizing deep learning models for robustness and explainability in Intel OpenVINO Internship (m/f/d)
Intel Corporation · Munich · Update time: April 22, 2022
Job Description

Intel Labs is the company's world-class, industry-leading research organization, responsible for driving Intel's technology pipeline and creating new opportunities. The mission of Intel Labs is to deliver breakthrough technologies to fuel Intel's growth. This includes identifying and exploring compelling new technologies and high-risk opportunities ahead of business unit investment and demonstrating first-to-market technologies and innovative new usages for computing technology. Intel Labs engages the leading thinkers in academia and industry in addition to partnering closely with Intel business units. The position is in the Dependability Research Lab of Intel Labs.


We are looking for a candidate for a temporary internship position (about 6 to 8 months) in our research team in Munich, Germany. Ideally, the candidate is currently pursuing a Master's degree or similar at a relevant academic institution and is eager to strengthen and extend their skillset at the forefront of artificial intelligence (AI) technology. Together with the research team, the candidate will work on the optimization of robust and explainable AI models in the domains of automated driving, robotics, smart cities, or industrial control.
The candidate will have the chance to gain hands-on experience with industry-leading AI software tools such as the Intel OpenVINO toolkit, a popular and powerful framework for optimizing AI models on hardware.

Tasks

Together with the team, you will:

  • Understand and develop methods that improve the robustness and explainability of deep learning models in the presence of simulated system perturbations, for example platform faults.

  • Evaluate the improvement and overhead of the above methods in selected safety-critical use cases, for example object detection for automated driving.

  • Implement selected solutions in Intel OpenVINO to allow easy, automated application by customers. Depending on the outcome of this work, results may become part of the open-source toolkit.

  • Help to create and potentially present a demo of the above in the context of a relevant use case (e.g. robotics).
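To give a flavor of the first two tasks: a simulated platform fault is often modeled as a single bit flip in a model parameter. The sketch below is purely illustrative (it is not part of the job description); it assumes PyTorch, which the qualifications mention, and the helper name `flip_weight_bit` is hypothetical.

```python
import struct

import torch
import torch.nn as nn


def flip_weight_bit(model: nn.Module, param_name: str, index: int, bit: int) -> None:
    """Flip one bit of a float32 parameter in-place to emulate a platform fault."""
    param = dict(model.named_parameters())[param_name]
    with torch.no_grad():
        flat = param.view(-1)
        # Reinterpret the float32 value as its raw 32-bit pattern,
        # flip one bit, and reinterpret the result back as a float.
        bits = struct.unpack("<I", struct.pack("<f", flat[index].item()))[0]
        bits ^= 1 << bit
        flat[index] = struct.unpack("<f", struct.pack("<I", bits))[0]


# Tiny model: compare clean vs. faulty outputs for the same input.
torch.manual_seed(0)
model = nn.Linear(4, 2)
x = torch.ones(1, 4)
clean = model(x)
flip_weight_bit(model, "weight", index=0, bit=30)  # flip a high exponent bit
faulty = model(x)
```

Flipping a high exponent bit typically changes the weight by many orders of magnitude, so the faulty output deviates strongly from the clean one; evaluating how such deviations affect, say, object-detection metrics is the kind of analysis the tasks above describe.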


Qualifications

The ideal candidate meets the following criteria:

  • Pursuing studies in computer science, computer engineering, electrical engineering, mathematics, physics, or other related fields.

  • Good background in AI methods, especially CNNs, and their performance metrics.

  • Programming skills in Python, preferably familiarity with PyTorch.

  • Knowledge about dependability, robustness, and explainability concepts in AI.

  • Excellent English is a must; knowledge of German is advantageous.


Please note that in order to be eligible for this position you need to be enrolled in a university program.

 




Work Model for this Role

This role will be eligible for our hybrid work model which allows employees to split their time between working on-site at their assigned Intel site and off-site.

DE | Intern | JR0218876 | Munich | Intel Labs
