Designing a machine-learning model for a given task — such as image classification, disease diagnosis, or stock market prediction — is an arduous, time-consuming process. Experts first choose from among many different algorithms to build the model around. Then they manually tweak “hyperparameters” — settings that determine the model’s overall structure — before training begins. Automated machine-learning (AutoML) systems were developed to iteratively test and modify algorithms and hyperparameters, and to select the best-suited models.
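The iterate-and-select loop could be sketched as follows. The algorithm names, the single “depth” hyperparameter, and the scoring function are toy stand-ins for illustration, not any real system’s API; a real system would train each candidate model on data and measure validation performance:

```python
import itertools

# Toy stand-in for validation accuracy: a real AutoML system would
# train the chosen algorithm on data and evaluate it on a held-out set.
def toy_score(algorithm, depth):
    base = {"decision_tree": 0.70,
            "logistic_regression": 0.75,
            "neural_network": 0.72}[algorithm]
    return base + 0.02 * min(depth, 5)  # pretend moderate capacity helps

def select_best_model():
    """Try algorithm/hyperparameter combinations; keep the top scorer."""
    candidates = itertools.product(
        ["decision_tree", "logistic_regression", "neural_network"],
        range(1, 11),  # a structural hyperparameter, e.g. tree depth or layer count
    )
    return max(candidates, key=lambda c: toy_score(*c))
```

This is the work that experts otherwise do by hand: proposing a candidate, evaluating it, and repeating until a satisfactory model emerges.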
Recently, researchers from MIT, the Hong Kong University of Science and Technology (HKUST), and Zhejiang University have developed an interactive tool that lets users see and control how automated machine-learning systems work. The new tool, called ATMSeer, puts the analysis and control of AutoML methods into users’ hands. It takes as input an AutoML system, a dataset, and some information about a user’s task. Then, it visualizes the search process in a user-friendly interface, which presents in-depth information on the models’ performance. “We let users pick and see how the AutoML system works,” says Kalyan Veeramachaneni, a principal research scientist in the MIT Laboratory for Information and Decision Systems (LIDS), who leads the Data to AI group. “You might simply choose the top-performing model, or you might have other considerations or use domain expertise to guide the system to search for some models over others.”
At the core of the new tool is a custom AutoML system, called “Auto-Tuned Models” (ATM), developed by Veeramachaneni and other researchers in 2017. Unlike traditional AutoML systems, ATM fully catalogues all search results as it tries to fit models to data. ATM takes as input any dataset and an encoded prediction task. The system randomly selects an algorithm class — such as neural networks, decision trees, random forests, or logistic regression — and the model’s hyperparameters, such as the size of a decision tree or the number of neural network layers.
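The sampling-and-cataloguing loop described above can be sketched like this. The hyperparameter spaces and the scoring step are invented for illustration; in ATM proper, each trial would train the selected model on the input dataset and record its actual performance:

```python
import random

# Illustrative hyperparameter spaces for each algorithm class
# (ranges are assumptions, not ATM's real search space).
SEARCH_SPACE = {
    "decision_tree":       {"max_depth": range(1, 21)},
    "random_forest":       {"n_estimators": range(10, 201, 10)},
    "logistic_regression": {"C_exponent": range(-4, 5)},
    "neural_network":      {"n_layers": range(1, 6)},
}

def run_trial(rng):
    """Randomly select an algorithm class and its hyperparameters, then score it."""
    algo = rng.choice(sorted(SEARCH_SPACE))
    hyperparams = {name: rng.choice(list(space))
                   for name, space in SEARCH_SPACE[algo].items()}
    score = rng.uniform(0.5, 0.95)  # stand-in for validation accuracy
    return {"algorithm": algo, "hyperparams": hyperparams, "score": score}

def atm_search(n_trials=100, seed=42):
    """Run trials, cataloguing every result (not only the best) for later analysis."""
    rng = random.Random(seed)
    catalog = [run_trial(rng) for _ in range(n_trials)]
    best = max(catalog, key=lambda t: t["score"])
    return catalog, best
```

Keeping the full `catalog`, rather than only `best`, is what lets a front end like ATMSeer visualize the entire search — for instance, score distributions per algorithm class — instead of reporting a single winning model.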