One Proxy Device Is Enough for Hardware-Aware Neural Architecture Search

BINGQIAN LU, University of California, Riverside, United States
JIANYI YANG, University of California, Riverside, United States
WEIWEN JIANG, George Mason University, United States
YIYU SHI, University of Notre Dame, United States
SHAOLEI REN*, University of California, Riverside, United States

Convolutional neural networks (CNNs) are used in numerous real-world applications such as vision-based autonomous driving and video content analysis. To run CNN inference on various target devices, hardware-aware neural architecture search (NAS) is crucial. A key requirement of efficient hardware-aware NAS is the fast evaluation of inference latencies in order to rank different architectures. While building a latency predictor for each target device has been commonly used in the state of the art, this is a very time-consuming process that lacks scalability in the presence of extremely diverse devices. In this work, we address the scalability challenge by exploiting latency monotonicity — the architecture latency rankings on different devices are often correlated. When strong latency monotonicity exists, we can re-use architectures searched for one proxy device on new target devices without losing optimality. In the absence of strong latency monotonicity, we propose an efficient proxy adaptation technique to significantly boost the latency monotonicity. Finally, we validate our approach and conduct experiments with devices of different platforms on multiple mainstream search spaces, including MobileNet-V2, MobileNet-V3, NAS-Bench-201, ProxylessNAS and FBNet. Our results highlight that, by using just one proxy device, we can find almost the same Pareto-optimal architectures as the existing per-device NAS, while avoiding the prohibitive cost of building a latency predictor for each device.

ACM Reference Format:
Bingqian Lu, Jianyi Yang, Weiwen Jiang, Yiyu Shi, and Shaolei Ren. 2021. One Proxy Device Is Enough for Hardware-Aware Neural Architecture Search. Proc. ACM Meas. Anal. Comput. Syst. 5, 3, Article 34 (December 2021), 34 pages. https://doi.org/10.1145/3491046

1 INTRODUCTION

Convolutional neural networks (CNNs) are among the most commonly used classes of deep neural networks, offering human-level inference accuracy for numerous real-world applications such as vision-based autonomous driving and video content analysis [21]. Going beyond the conventional server-only platforms, CNNs have been deployed on increasingly diverse devices and platforms, including mobile, ASIC and edge devices [46]. As the foundation of a CNN, the neural architecture can greatly affect the resulting model performance such as accuracy and latency. Thus, optimizing the architecture through hardware-aware neural architecture search (NAS) is crucial and is being actively studied [5,13,34,40,41,45].

The exponentially large search space, consisting of billions of architectures or even more, renders NAS a very challenging task [15,40,41,43,45,47].
*Corresponding author: Shaolei Ren (sren@ece.ucr.edu)

Authors' addresses: Bingqian Lu, blu029@ucr.edu, University of California, Riverside, 900 University Ave., Riverside, California, 92521, United States; Jianyi Yang, jyang239@ucr.edu, University of California, Riverside, 900 University Ave., Riverside, California, 92521, United States; Weiwen Jiang, wjiang8@gmu.edu, George Mason University, 4400 University Drive, Fairfax, Virginia, 22030, United States; Yiyu Shi, yshi4@nd.edu, University of Notre Dame, 257 Fitzpatrick Hall, Notre Dame, Indiana, 46556, United States; Shaolei Ren, sren@ece.ucr.edu, University of California, Riverside, 900 University Ave., Riverside, California, 92521, United States.

2021. 2476-1249/2021/12-ART34 https://doi.org/10.1145/3491046

The key reason is that evaluating and ranking the architectures in terms of metrics of interest (e.g., accuracy and latency) can be extremely time-consuming. As a result, many studies have focused on reducing the cost¹ of training and evaluating the architecture accuracy, including reinforcement learning-based NAS with accuracy evaluated on a small proxy dataset [52], differentiable NAS [45], one-shot or few-shot NAS [4,9,51], and NAS assisted with an accuracy predictor [15,43], among many others.

¹ In this paper, "cost" also interchangeably refers to computational complexity: a higher complexity requires more computational resources (measured in, e.g., machine-hours) and hence a higher monetary cost, too.

In addition to speeding up accuracy evaluation, reducing the cost of assessing the inference latency on a target device is equally important for efficient hardware-aware NAS [9,19,33,40]. The naive method of measuring the latency for each architecture can lead to a total search time exceeding several weeks or even months, whereas using the floating-point operations (FLOPs) as a device-agnostic proxy may not accurately reflect the true inference latency on different devices [40]. As a result, state-of-the-art (SOTA) hardware-aware NAS has mainly relied on device-specific latency lookup tables or predictors [5,10,13,15,19,34,43,47].

Nonetheless, building a latency predictor for a target device requires significant engineering effort and can be very slow. For example, [10] measures average inference latencies for 5k sample DNNs on a mobile device and uses the results to build a latency lookup table for that specific device. Assuming the ideal scenario of 20 seconds per measurement (to average out randomness per the TensorFlow guideline [22]) and non-stop measurement, it can take 27+ hours to build the latency predictor for one single device [10]. Similarly, it is reported by [15] that 350k records are collected for building a latency predictor for just one device. Even by measuring latencies on six devices in parallel, the authors of [29] report on OpenReview that they spent one month collecting latency data on the small NAS-Bench-201 space and building latency predictors for another two datasets on the FBNet space. More recently, kernel-level latency predictors that capture complex processing flows of different neural execution units have been proposed, but it takes up to 4.4 days just to collect the latency measurements on one edge device [50]. All these facts highlight the crucial point that building a latency predictor for a target device — a key step of SOTA hardware-aware NAS — is costly and cannot be taken for granted as a free lunch.
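As a quick sanity check on the 27+ hour figure above (an illustrative back-of-the-envelope calculation, using only the per-measurement time and sample count already stated):

  $5{,}000 \text{ models} \times 20\,\mathrm{s \ per \ model} = 100{,}000\,\mathrm{s} \approx 27.8\,\mathrm{hours}$

of non-stop measurement on a single device, before the search itself can even begin.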
Fig. 1. Device statistics for Facebook users as of 2018 [46] (CDF over unique SoCs: roughly 50 SoCs cover 65% of users, and 225 SoCs cover 95%).

Worse yet, the target devices for CNN deployment are extremely diverse, ranging from mobile CPUs, ASICs and edge devices to GPUs. For example, even for mobile devices alone, as shown in Fig. 1, there are more than two thousand system-on-chips (SoCs) available in the market, and only the top 30 SoCs each have over 1% of the share [46]. Importantly, the diverse set of devices have different latency collection pipelines, programming environments, and/or hardware domain knowledge requirements [29]. Thus, in the presence of extremely diverse target devices, the combined cost of building device-specific latency predictors for hardware-aware NAS is prohibitively high and is increasingly becoming a key bottleneck for scalable hardware-aware NAS. In addition, this challenge is further magnified by the fact that building device-specific latency predictors is not a one-time cost: varying the input resolution and/or output classes also requires new latency predictors (e.g., two device-specific latency predictors are built, each for one dataset, on the FBNet space [29]). Consequently, how to efficiently scale up hardware-aware NAS for extremely diverse target devices has arisen as a critical challenge.

Contributions. In this paper, we focus on reducing the total latency evaluation cost for scalable hardware-aware NAS in the presence of diverse target devices across different platforms (e.g., mobile platform, FPGA platform, desktop/server GPU, etc.). Concretely, we show that latency monotonicity commonly exists among different devices, especially devices of the same platform. Informally, latency monotonicity means that the ranking orders of different architectures' latencies are correlated on two or more devices. Thus, with latency monotonicity, building a latency predictor for just one device that serves as a proxy — rather than for each individual target device as in the state of the art [9,15,29] — is enough. Even when a target device has weak monotonicity with the default proxy device (e.g., a mobile phone proxy vs. a target edge TPU), we use an efficient adaptation technique which, by measuring the latencies of a small number of architectures on the target device, significantly boosts the latency monotonicity between the adapted proxy device and the target device.

We validate our approach by considering various search spaces and running experiments with devices of different platforms, including mobile, desktop GPU, desktop CPU, edge devices and FPGA. Our results show that, using just one proxy device, there is almost no Pareto optimality loss compared to architectures specifically optimized for each target device.
In addition, we also consider the recent latency datasets [19,29,50], and further confirm that one proxy device is enough for hardware-aware NAS.

2 STATE OF THE ART AND LIMITATIONS OF HARDWARE-AWARE NAS

In this section, we provide an overview of the existing (hardware-aware) NAS algorithms as well as SOTA approaches to reducing the performance evaluation cost, and highlight their limitations.

2.1 Overview

Neural architecture is a key design hyperparameter that affects the inference accuracy and latency of DNN models. In Fig. 2, we show an example architecture, which is found by searching over the possible layer-wise kernel sizes, expansion ratios, and block depths in the MobileNet-V2 search space using evolutionary search [9].

Fig. 2. An example architecture in the MobileNet-V2 search space, which achieves 70.2% accuracy on ImageNet and 71 ms average inference latency on S5e. The text "$Z_1 \times Z_2 \times Z_3$" denotes the input size for each layer.

The available architecture space is exponentially large, often consisting of billions of choices or even more (e.g., >$10^{19}$ in [9]). To address the complexity challenge, NAS has recently been proposed to efficiently automate the discovery of neural architectures that exceed the performance of expert-designed architectures [52]. Next, we provide a summary of existing NAS algorithms.

2.1.1 NAS Without a Supernet. Many prior NAS algorithms can be broadly viewed as "NAS without a supernet", where the search process is entangled with the model training process [35,40,52]. Specifically, as illustrated in the left subfigure of Fig. 3, the NAS process is governed by a controller (e.g., a reinforcement learning agent): given each candidate architecture produced by the controller, the model is trained on the training dataset and then evaluated for its performance, based on which the controller produces another candidate architecture. This process repeats until convergence or until the maximum search iteration is reached. Techniques to reduce the search cost include training on part of the training dataset or a small proxy dataset, using Bayesian optimization or reinforcement learning to reduce the number of sampled candidate architectures, and parameterizing the architectures and using gradients of the loss to guide the search and training simultaneously, among others [31,34,38–40,52]. Nonetheless, the search cost for even a single device can still take up to 100+ GPU hours, lacking scalability in the presence of numerous heterogeneous devices [10,45].

Fig. 3. Overview of NAS algorithms. Left: NAS without a supernet. Right: One-shot NAS with a supernet.

2.1.2 One-shot NAS. In view of the extremely diverse devices and platforms for model deployment, one-shot NAS and its variants such as few-shot NAS have recently been proposed to reduce the search cost by exploiting the weight sharing mechanism [4,5,9,14,23,36,48,51]. Concretely, as illustrated in the right subfigure of Fig. 3, the key idea of one-shot NAS is to decouple the training process from the search process: pre-train a super large model (called a supernet) whose weights are shared among all the candidate architectures, and then use a separate search process to discover optimal architectures that inherit the weights from the supernet.
For example, in SOTA algorithms such as APQ, ChamNet, BigNAS and FBNet-V3 [9,14,15,43,48], a supernet is pre-trained first, which is then followed by a search process based on evolutionary algorithms or reinforcement learning to find an optimal architecture.

While pre-training the supernet is more costly than training an individual network, the training cost is one-time² for each learning task and, when amortized over hundreds of target devices, becomes much more affordable. For example, with the recent once-for-all algorithm [9], the amortized training cost for each target device is around 12 hours given a modest size of 100 devices, and even less given more devices.

² With the optimal architecture found by NAS, additional model updates (e.g., by training over the entire dataset or fine-tuning the weights) may still be needed to further improve the accuracy, but this will typically not affect the accuracy rankings of different architectures [34] and is orthogonal to NAS, whose goal is to decide an optimal neural architecture.

2.2 Current Practice for Reducing the Cost of Performance Evaluation

With an $O(1)$ model training cost incurred by one-shot NAS, the cost of performance evaluation — accuracy and latency evaluation — increasingly becomes a bottleneck.

Accuracy evaluation. For each candidate architecture, the time needed to evaluate the inference accuracy (even on a small proxy/validation dataset) is on the order of minutes. Thus, to expedite the accuracy evaluation, SOTA NAS algorithms have leveraged an accuracy predictor: first measuring the accuracies of sample architectures (extracted from the supernet) and then building a machine learning model [14,15,43]. The candidate architectures can therefore be ranked based on their predicted accuracies, speeding up the runtime process of NAS. Since the inference accuracy is evaluated on the testing dataset, the accuracy predictor is device-independent and can be re-used for different target devices, incurring a fixed one-time cost of $O(1)$.

Latency evaluation. Measuring the actual latency for each candidate architecture takes about 20 seconds or more (to average out the random variations, as per the TensorFlow-Lite guideline [22] and also suggested by [10]). Meanwhile, the total number of candidate architectures sampled by a NAS algorithm is typically on the order of 10k or even more [12,15,40], which brings the total latency evaluation time to 50+ hours for just one target device.

Table 1. Cost Comparison of Hardware-aware NAS Algorithms for $n$ Target Devices.

Algorithm          | Search Method | Model Training | Accuracy Eval. | Latency Eval. | Total Cost (Machine-hours)
MNasNet [40]       | RL            | O(n)           | O(n)           | O(n)          | 6912n
FBNet [45]         | Gradient      | O(n)           | O(n)           | O(n)          | 216n
ProxylessNAS [10]  | Gradient      | O(n)           | O(n)           | O(n)          | (200 + c_L)n
NetAdapt [47]      | Loop          | O(1)           | O(n)           | O(n)          | c_T + (c_A + c_L)n
APQ [43]           | Evolutionary  | O(1)           | O(1)           | O(n)          | 2400 + c_A + c_L·n
ChamNet [15]       | Evolutionary  | O(1)           | O(1)           | O(n)          | c_T + c_A + c_L·n
Once-for-All [9]   | Evolutionary  | O(1)           | O(1)           | O(n)          | 1200 + c_A + c_L·n

Using the FLOPs as a device-agnostic proxy cannot accurately reflect the true latency rankings of different architectures on a target device [40].
Instead, to reduce the latency evaluation cost, SOTA hardware-aware NAS algorithms have most commonly used latency predictors — profiling/measuring the latencies of sample architectures in advance and then building a latency predictor (either a lookup table or a machine learning model) [9,10,15,50]. The latency predictor is then utilized to guide the NAS process, without measuring the actual latency on the target device.

2.3 Limitations

Despite the recent progress, SOTA hardware-aware NAS algorithms still cannot scale up in view of the extremely diverse target devices for model deployment.

Summary of total search cost. Given $n$ target devices, we summarize in Table 1 the total search costs, measured in machine-hours, of a few representative hardware-aware NAS algorithms. If the quantitative evaluation cost is not reported for an algorithm, we use $c_T$, $c_A$, and $c_L$ to denote its model training cost, accuracy evaluation cost, and latency evaluation cost, respectively. Empirically, for each device, $c_L$ is on the order of at least a few tens of hours [10,19] or even hundreds of hours [29,50]. Thus, we can see that the latency evaluation cost is a significant or even dominant part of the total search cost, especially as $n$ increases.

While the actual execution time of NAS may be further reduced by parallel processing, the total cost in terms of machine-hours does not decrease. For example, latency measurements on multiple devices in parallel and assigning more GPUs for supernet training can both speed up the overall NAS process, but the total resources needed by NAS remain unchanged (or are possibly even higher due to communication overheads among GPUs for distributed training). For this reason, machine-hours is a more accurate and widely-used metric for the total resource expenditure of NAS [9,40,43,45].

Challenges. In the current practice, building a latency predictor for each target device requires significant engineering effort and can be very slow, while it is often excluded from the total cost calculation [15,30,42,43,45,47]. Moreover, the diverse set of target devices have different latency collection pipelines, programming environments, and/or hardware domain knowledge requirements, all of which add to the significant challenges of building a latency predictor [29].

The challenges of building latency predictors have been increasingly recognized and have motivated some recent studies on latency predictors to facilitate hardware-aware NAS research. For example, [29] releases latency datasets/predictors for six devices on the NAS-Bench-201 space and FBNet space. Even by measuring latencies in parallel, the authors of [29] report on OpenReview that they spent one month collecting latency measurements. Another recent study [50] builds a kernel-level latency predictor, taking 1–4.4 days of latency measurement on each device depending on how powerful the device is. Nonetheless, these approaches are not scalable, and the latency predictors built by these studies are all specific to their limited set of devices.

We can conclude that, in the presence of extremely diverse target devices, the combined cost of building latency predictors for hardware-aware NAS is prohibitively high at $O(n)$.
This has increasingly become a bottleneck for scalability.

3 PROBLEM FORMULATION, INSIGHTS, AND PRACTICAL CONSIDERATION

We present the problem formulation for hardware-aware NAS, show the key insights for when we can reduce the latency evaluation cost to $O(1)$, and finally discuss practical considerations.

3.1 Problem Formulation

The general problem of hardware-aware NAS can be formulated as follows:

  $\max_{x \in \mathcal{X}} \; \max_{\omega_x} \; \mathit{accuracy}(x, \omega_x)$   (1)

  $\text{s.t.} \quad \mathit{latency}(x; d) \le L_d$   (2)

where $x$ represents the architecture, $\mathcal{X}$ is the search space under consideration, $\omega_x$ is the network weight given architecture $x$, $L_d$ is the average inference latency constraint, and $d \in \mathcal{D}$ denotes a device with $\mathcal{D}$ being the device set. Note that $\mathit{accuracy}(x, \omega_x)$ is measured on a dataset independent of the device $d$, and can also be replaced with a certain loss function (e.g., cross entropy). By varying $L_d$ within its feasible range $[L_{d,\min}, L_{d,\max}]$, we can obtain a set of Pareto-optimal architectures, denoted by $\mathcal{P}_d = \{x^*(L_d; d), \text{ for } L_d \in [L_{d,\min}, L_{d,\max}]\}$.

Remark. We offer the following remarks on the problem formulation. First, due to the non-convexity and combinatorial nature of the problem, the architectures obtained by using approximate methods (e.g., evolutionary search [43]) to solve Eqns. (1)(2) may not be globally Pareto-optimal in a strict sense; instead, the notion of Pareto-optimality (or simply, optimality) in the context of NAS usually means a satisfactory architecture that outperforms or is very close to SOTA results [1,14,40]. Second, as recently shown in [29], the inference latency and energy of an architecture on a device are very strongly correlated. That is, an energy constraint can be implicitly mapped to a corresponding latency constraint. Thus, like in [5,13,40,45,47], we only consider the inference latency constraint in our formulation for convenience of presentation.

3.2 Key Insights

From the NAS problem in Eqns. (1)(2), achieving an $O(1)$ latency evaluation cost may seem very unlikely. The reason is that the inference latency $\mathit{latency}(x; d)$ is highly device-specific — with a new device, the latency function will change in general, and so will the Pareto-optimal architectures. We notice, however, that the Pareto-optimal architectures for two different devices can actually be identical if their latency functions are monotonic, as formally defined and proved below.

Definition 1 (Latency Monotonicity). Given two different devices $d_1 \in \mathcal{D}$ and $d_2 \in \mathcal{D}$, if $\mathit{latency}(x_1; d_1) \ge \mathit{latency}(x_2; d_1)$ and $\mathit{latency}(x_1; d_2) \ge \mathit{latency}(x_2; d_2)$ hold simultaneously for any two neural architectures $x_1 \in \mathcal{X}$ and $x_2 \in \mathcal{X}$, then the two devices $d_1$ and $d_2$ are said to satisfy latency monotonicity.

Proposition 3.1. If two devices $d_1 \in \mathcal{D}$ and $d_2 \in \mathcal{D}$ strictly satisfy latency monotonicity, then they have the same set of Pareto-optimal architectures, i.e., $\mathcal{P}_{d_1} = \mathcal{P}_{d_2}$, where $\mathcal{P}_{d_i} = \{x^*(L_{d_i}; d_i), \text{ for } L_{d_i} \in [L_{d_i,\min}, L_{d_i,\max}]\}$ for $i = 1, 2$.
Proof. Define $\mathcal{X}_{L_{d_1}, d_1}$ as the set of architectures satisfying $\mathit{latency}(x; d_1) \le L_{d_1}$. By latency monotonicity, we can find another constraint $L_{d_2}$ such that $\mathcal{X}_{L_{d_1}, d_1} = \mathcal{X}_{L_{d_2}, d_2}$. In other words, the latency constraint $\mathit{latency}(x; d_1) \le L_{d_1}$ is equivalent to $\mathit{latency}(x; d_2) \le L_{d_2}$. Therefore, the device-aware NAS problems formulated in Eqns. (1)(2) for devices $d_1$ and $d_2$ are equivalent, sharing the same set of Pareto-optimal architectures. ∎

Proposition 3.1 guarantees that, for any two devices satisfying latency monotonicity, we only need to run device-aware NAS on one device, avoiding the cost of numerous latency measurements and of building a separate latency predictor for each device. The key reason is that in NAS, it is the architecture's accuracy and latency performance ranking that really matters for Pareto-optimality. Consequently, if latency monotonicity is satisfied among all the target devices, the latency evaluation cost can be kept at $O(1)$.

3.3 Practical Consideration

To quantify the degree of latency monotonicity in practice, we use the metric of Spearman's Rank Correlation Coefficient (SRCC), which lies between -1 and 1 and assesses the statistical dependence between the rankings of two variables using a monotonic function. The greater the SRCC of CNN latencies on two devices, the better the latency monotonicity. An SRCC of 0.9 to 1.0 is usually viewed as strongly dependent in terms of monotonicity [3].

While Proposition 3.1 does not strictly hold when the SRCC is less than 1.0, we note that a sufficiently high SRCC (e.g., around 0.9 in our experiments) is already good enough in practice. This is due in large part to imperfection/approximation in other aspects of the NAS process. Concretely, in SOTA hardware-aware NAS algorithms [15,40,43], the accuracy predictor (or the accuracy measured on a small proxy dataset) only has an SRCC value of around 0.9 with the true accuracy. Thus, given the imperfection of accuracy evaluation, strictly satisfying latency monotonicity does not offer substantial benefits.

4 LATENCY MONOTONICITY IN THE REAL WORLD

We now investigate latency monotonicity in the real world and show that it commonly exists among devices, especially those of the same platform.

4.1 Intra-Platform Latency Monotonicity

We empirically show the existence of strong latency monotonicity among devices of the same platform, including mobile, FPGA, desktop GPU and CPU.

Mobile platform. We first empirically measure the actual latencies of CNN models on four mobile devices: Samsung Galaxy Tab S5e, Samsung Galaxy Tab A, Lenovo Moto Tab, and Vankyo MatrixPad Z1 (a low-end device). The detailed device specifications are listed in Table 2. We randomly sample 10k models from the MobileNet-V2 space [37] (details in Section 6). Then, we deploy these models on the four devices and calculate their average inference latencies. We show the actual latencies on S5e, Lenovo, and Vankyo versus TabA in Fig. 4(a), where each dot represents one CNN model.

We see that when the sampled CNN models run faster on TabA, they also run faster on the other devices. In Fig. 4(a), the maximum standard deviation (denoted by the vertical line within each bin) is 1.3% for Vankyo, while it is a negligible 0.6% and 0.84% for Lenovo and S5e, respectively. Thus, latency monotonicity is well preserved on these devices.
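As a concrete illustration of how the SRCC between two devices is computed from paired latency measurements, the minimal Python sketch below uses SciPy's spearmanr; the latency arrays are hypothetical placeholders standing in for measurements of the same sampled architectures on two devices (e.g., S5e and TabA).

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical measured average latencies (ms) of the SAME sampled architectures
# on two devices; in our setting these would be the 10k MobileNet-V2 models
# measured on, e.g., S5e (proxy) and TabA (target).
latency_proxy  = np.array([71.2, 45.8, 102.5, 63.1, 88.0])
latency_target = np.array([143.0, 95.4, 210.7, 130.2, 180.9])

# Spearman's rank correlation coefficient quantifies latency monotonicity:
# values close to 1.0 mean the two devices rank architectures almost identically.
srcc, p_value = spearmanr(latency_proxy, latency_target)
print(f"SRCC = {srcc:.3f} (p = {p_value:.3g})")
```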
We further show the SRCC values of these 10k sampled models' latencies on our four mobile devices as a heatmap in Fig. 4(b). We see that the SRCC between any pair of our mobile devices is larger than 0.98, implying strong latency monotonicity.

Table 2. Device specifications. Full details are not available for Vankyo.

Device                 | Abbrev. | Chipset        | CPU (GHz) | Cores | RAM (GB) | RAM Freq. (MHz) | Peak Perf. (GFLOPs/sec) | Mem. Bandwidth (GB/sec)
Samsung Galaxy Tab S5e | S5e     | Snapdragon 670 | 2         | 8     | 4        | 1866            | 40.6                    | 14.93
Samsung Galaxy Tab A   | TabA    | Snapdragon 429 | 2         | 4     | 2        | 933             | 15.3                    | 7.46
Lenovo Moto Tab        | Lenovo  | Snapdragon 625 | 2         | 8     | 2        | 933             | 26.5                    | 7.5
Vankyo MatrixPad Z1    | Vankyo  | N/A            | 1.5       | 4     | 1        | 933             | N/A                     | N/A

Fig. 4. Empirical measurement of latency monotonicity. (a)(c) Black vertical lines denote the standard deviation of latency data points within each bin, with the center denoting the average. (b)(d) SRCC of 10k sampled models' latencies on different pairs of devices.

Fig. 5. CDF of SRCC values of DNN models on (a) the top-300 mobile phones and (b) the top-150 mobile SoCs. The annotation "high/mid/low" represents the highest/middle/lowest 33.3% of the devices.

AI-Benchmark data. To examine latency monotonicity at scale, we resort to the AI-Benchmark dataset, which reports DNN inference latency measurements on diverse hardware [2]. Considering the top-300 smartphones (ranging from Huawei Mate 40 Pro to Sony Xperia Z3) and the top-150 mobile SoCs (ranging from HiSilicon Kirin 9000 to MediaTek Helio P10) ranked by the metric "AI-Score" [25], we show in Fig. 5 the SRCC values of the latency rankings based on the 22 DNN models listed in the dataset, including both floating-point and quantized models (e.g., MobileNet-V2-INT8 and MobileNet-V2-FP16). We see that latency monotonicity is well preserved at scale. For example, among the top-100 mobile phones, the SRCC values between 50+% of all device pairs are higher than 0.9 (a very strong ranking correlation). While the AI-Benchmark dataset is built for orthogonal purposes and includes models from different search spaces, the resulting SRCC values, along with our own experiments, still provide a good reference and show reasonable latency monotonicity for mobile devices at scale.

Other platforms. Going beyond the mobile platform, we also perform experiments to show latency monotonicity on other platforms: desktop CPU, GPU and FPGA.

Table 3. Nine FPGA specifications on the Xilinx ZCU 102 board.

Index | Computation Design (Tm, Tn, Tm(d)) | Communication Design (Ip, Op, Wp)
1     | 160, 12, 576                       | 11, 8, 13
2     | 160, 12, 576                       | 5, 5, 22
3     | 160, 12, 576                       | 12, 13, 17
4     | 160, 12, 576                       | 10, 10, 10
5     | 130, 12, 832                       | 10, 10, 10
6     | 100, 16, 832                       | 10, 10, 10
7     | 220, 8, 704                        | 10, 10, 10
8     | 100, 16, 832                       | 6, 14, 10
9     | 100, 18, 704                       | 10, 10, 10

Fig. 6. Latency monotonicity on non-mobile platforms: (a) desktop CPU, (b) desktop GPU, (c) FPGA. Black vertical lines denote the standard deviation of latency data points within each bin, with the center denoting the average.

We build latency lookup tables for three desktop CPUs: Intel Core i7-4790, Intel Core i7-4770HQ, and E5-2673 v3. In addition, we consider four NVIDIA GPUs: Tesla T4, Tesla K80, Quadro M4000, and Quadro P5000.
For the FPGA platform, we configure nine subsystems on a Xilinx ZCU 102 FPGA board to create nine different FPGAs, following the hardware design space in [26]. The detailed configurations of the FPGAs are shown in Table 3. "Computation Design" is the computation subsystem design, where Tm and Tn are the loop tiling parameters for input and output feature maps, and Tm(d) denotes the parameter for depth-wise separable convolution. "Communication Design" represents the communication subsystem design, where Ip, Op, and Wp are the communication ports allocated for input feature maps, output feature maps, and weights, respectively. We measure CNN model latency on the nine Xilinx ZCU 102 configurations shown in Table 3, using the performance model in [26].

We consider latencies for the same set of 10k models as in Fig. 4, and plot the results in Figs. 6 and 7, respectively. We see that within each platform, latency monotonicity is generally very well preserved, with most SRCC values close to or above 0.9. In addition, we also show in Fig. 7 the SRCC between the model FLOPs and the actual inference latency, confirming the prior observation that FLOPs may not accurately reflect the true latency performance [29,45,50].

Fig. 7. SRCC of 10k sampled models' latencies on different pairs of non-mobile devices: (a) desktop CPU, (b) desktop GPU, (c) FPGA. The specifications of the nine FPGAs in Fig. 7(c) are listed in Table 3.

Next, to complement our own measurements, we also examine latency monotonicity by leveraging third-party latency predictors and measurement results on other devices. The results are available in Appendix B.1 and further corroborate our finding.

4.2 Inter-Platform Latency Monotonicity

We choose one FPGA (Xilinx ZCU 102), one desktop CPU (Intel Core i7-4790), and one desktop GPU (Tesla T4) as cross-platform devices. We show the latency monotonicity results and SRCC values for the same set of 10k models in Figs. 4(c) and 4(d), respectively. It can be seen that the latency rankings are only moderately correlated for cross-platform devices. The SRCC values are lower than in the case of mobile device pairs (Fig. 4(b)), since mobile devices often differ significantly from desktops/FPGAs.

Our finding is also confirmed in the appendix by considering the six cross-platform devices on NAS-Bench-201 [17] and FBNet [45], and four devices on MobileNet-V3 using nn-Meter [50].

4.3 Roofline Analysis

We now explain the empirically observed latency monotonicity based on roofline analysis, a methodology for visually representing a hardware platform's peak attainable performance as a function of the operational intensity, which identifies the bottleneck of the system [44]. Fig. 8(a) shows the theoretical roofline models of two mobile devices (Samsung Galaxy S5e and TabA), plotted according to their reported hardware specifications listed in Table 2. When the operational intensity is low (linear slope region), memory bandwidth is the limiting factor for program speed (i.e., memory-bound); when the operational intensity is high (horizontal region), the peak FLOPs rate becomes the bottleneck (i.e., compute-bound).

Suppose that we have two devices $d_1$ and $d_2$ with memory bandwidths $B_{d_1}$ and $B_{d_2}$, respectively, and two CNN models of architectures $x_1$ and $x_2$ with operational intensities $OI_{x_1}$ and $OI_{x_2}$, respectively.
Next, we show that latency monotonicity is guaranteed to hold between two devices if the CNN models are either memory-bound or compute-bound on both devices.

Memory-bound. In the memory-bound region, the slope of a device's roofline model is its bandwidth, and the attainable performance is the bandwidth multiplied by the program's operational intensity, so the latency of model $x$ on device $d$ is $FLOPs_x / (OI_x \cdot B_d)$. Assuming, without loss of generality, that $x_1$ is slower than $x_2$ on device $d_1$, we have

  $\frac{FLOPs_{x_1}}{OI_{x_1} \cdot B_{d_1}} > \frac{FLOPs_{x_2}}{OI_{x_2} \cdot B_{d_1}}.$

Then, by multiplying both sides by $B_{d_1}/B_{d_2}$, we obtain

  $\frac{FLOPs_{x_1}}{OI_{x_1} \cdot B_{d_2}} > \frac{FLOPs_{x_2}}{OI_{x_2} \cdot B_{d_2}},$

i.e., $x_1$ is also slower than $x_2$ on device $d_2$. Thus, latency monotonicity holds for the two devices $d_1$ and $d_2$.

Compute-bound. Likewise, if the CNN models fall into the compute-bound region on both devices, we can establish latency monotonicity using a similar argument.

For search spaces with models that span both the memory-bound and compute-bound regions, the latency monotonicity may not be strong (which we shall address in this paper). Moreover, the roofline analysis only provides a sufficient condition for latency monotonicity under the assumption that devices run at their peak performance (in terms of FLOPs/sec). Thus, we experimentally examine the actual performance of the CNN models on the four mobile devices listed in Table 2.

Fig. 8. (a) Theoretical roofline models, plotted according to the hardware specifications of S5e and TabA. (b) Sampled models: black vertical lines denote the standard deviation of data within each bin, with the center denoting the average.

Fig. 9. Empirical roofline models of the devices in Table 2, measured with Gables [24] (measured peak performance: S5e 9.5 GFLOPs/sec, TabA 5.6 GFLOPs/sec, Lenovo 11.2 GFLOPs/sec, Vankyo 2.9 GFLOPs/sec).

We measure the actual attainable peak performance of our four devices with the tool in [24], a roofline model built specifically for mobile SoCs, using the 10,000 models randomly sampled from the MobileNet-V2 space [37]. The empirical roofline results are shown in Fig. 8(b). The operational intensity of the sampled models ranges from 2.6 to 5.4 FLOPs/Byte, while the devices' actual performances, as shown in Fig. 9, are much lower than their peaks and vary across models. Specifically, the ridge operational intensity of S5e, Lenovo, and Vankyo is less than or around 2 FLOPs/Byte, while TabA has a threshold of 3 FLOPs/Byte. Thus, most of our sampled models reside in the compute-bound region of these devices, except for those with operational intensity less than 3 FLOPs/Byte on TabA. This partially explains the strong latency monotonicity that we empirically observe in Fig. 4(b).
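To make the roofline argument concrete, the sketch below estimates each model's latency as the larger of its compute time and its memory time and checks whether two devices rank the models identically. The model FLOP counts and operational intensities are made-up numbers (chosen within the 2.6–5.4 FLOPs/Byte range observed above), while the device peaks are the theoretical specifications of S5e and TabA from Table 2; this is an illustration of the roofline reasoning, not our measurement pipeline.

```python
from scipy.stats import spearmanr

def roofline_latency(flops, op_intensity, peak_flops_per_s, bandwidth_bytes_per_s):
    """Latency (s) under a simple roofline model: a model is limited either by
    compute (FLOPs / peak) or by memory traffic (bytes moved / bandwidth)."""
    bytes_moved = flops / op_intensity
    return max(flops / peak_flops_per_s, bytes_moved / bandwidth_bytes_per_s)

# Hypothetical models: (total FLOPs, operational intensity in FLOPs/Byte).
models = [(300e6, 2.6), (450e6, 3.5), (600e6, 4.2), (900e6, 5.4)]

# Theoretical peaks from Table 2: (peak FLOPs/s, memory bandwidth Bytes/s).
devices = {"S5e": (40.6e9, 14.93e9), "TabA": (15.3e9, 7.46e9)}

lat = {name: [roofline_latency(f, oi, peak, bw) for f, oi in models]
       for name, (peak, bw) in devices.items()}
srcc, _ = spearmanr(lat["S5e"], lat["TabA"])
print(f"Roofline-predicted SRCC between S5e and TabA: {srcc:.2f}")
```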
5 HARDWARE-AWARE NAS WITH ONE PROXY DEVICE

Section 4 demonstrates good latency monotonicity among devices of the same platform, but this is not always the case, especially for devices across different platforms. To address the cases of low monotonicity, we propose efficient transfer learning based on the proxy device.

5.1 Necessity of Strong Latency Monotonicity

We first highlight the necessity of strong latency monotonicity for finding optimal architectures on the target device. An interesting and challenging case is when latency monotonicity is not satisfied, and this is not uncommon in practice, as shown in Section 4. In such cases, the optimal architectures searched on one device can be far from optimality on another device. To see this point, we show in Fig. 11(a) the performance of architectures found on different devices using the MobileNet-V2 search space. All latencies are measured on S5e (Mobile), and the architectures directly found for S5e are Pareto-optimal ones. Nonetheless, when performing NAS on two other (proxy) devices — 4790 (Desktop CPU) and T4 (Desktop GPU) — which both have low SRCC values with S5e, the searched architectures are highly sub-optimal. Thus, given weak latency monotonicity, the Pareto optimality of $\mathcal{P}_{d_0}$ on the proxy device $d_0$ does not hold on the target device $d$, calling for remedies to boost the latency monotonicity.

5.2 Overview

Our scalable hardware-aware NAS approach is illustrated in Fig. 10 and described in Algorithm 1; a minimal code sketch of the same flow is given at the end of this overview.

Algorithm 1: Hardware-Aware NAS With One Proxy Device
1: Inputs: target device $d$, proxy device $d_0$ with its latency predictor $L_{d_0}(x)$ and Pareto-optimal architecture set $\mathcal{P}_{d_0}$, small sample architecture set $\mathcal{A}$, SRCC threshold $S_{th}$
2: Output: Pareto-optimal architecture set $\mathcal{P}_d$
3: Measure $\mathit{latency}(x; d)$ for $x \in \mathcal{A}$;
4: Estimate the SRCC $S_{d,d_0}$ based on the sample architectures in $\mathcal{A}$;
5: if $S_{d,d_0} \ge S_{th}$ then
6:   Set $\mathcal{P}_d = \mathcal{P}_{d_0}$, or re-run NAS (e.g., evolutionary search) based on $L_{d_0}(x)$ to obtain $\mathcal{P}_d$;
7: else
8:   Use Eqn. (3) to obtain $L_{d_0,d}(x)$ based on the measured $\mathit{latency}(x; d)$ for $x \in \mathcal{A}$;
9:   Run NAS based on $L_{d_0,d}(x)$ to obtain $\mathcal{P}_d$;
10: end if
11: Measure the latencies of the architectures $x \in \mathcal{P}_d$ on device $d$, and remove non-Pareto-optimal ones from $\mathcal{P}_d$.

Fig. 10. Overview of using one proxy device for hardware-aware NAS.

Prerequisite. The prerequisite step is to select a proxy device $d_0$ and run SOTA hardware-aware NAS to find a set $\mathcal{P}_{d_0}$ of Pareto-optimal architectures for the proxy.

Checking latency monotonicity. Given a new target device, we check whether strong latency monotonicity is satisfied between the proxy device and the target device, by estimating the SRCC based on a small set $\mathcal{A}$ of sample architectures and comparing it against a threshold.

• When strong latency monotonicity holds. With strong latency monotonicity, the target device's Pareto-optimal architecture set $\mathcal{P}_d$ is also likely the same as the proxy device's $\mathcal{P}_{d_0}$. Alternatively, we can also re-run evolutionary search based on the proxy device's latency predictor to obtain more architectures, which in turn are also likely optimal for the target device.

• When strong latency monotonicity does not hold. We propose an efficient transfer learning technique — adapting the proxy's latency predictor to the target device. By doing so, we can quickly find optimal architectures for the target device, yet without first measuring the latencies of thousands of architectures and then building a latency predictor.
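The following minimal Python sketch mirrors the control flow of Algorithm 1. It is not a released implementation: all callables (target-device measurement, predictors, search, Pareto pruning) are hypothetical stand-ins supplied by the caller, corresponding to the components described in Sections 5.3–5.5.

```python
from scipy.stats import spearmanr

def nas_with_one_proxy(sample_archs, measure_on_target, proxy_predictor,
                       proxy_pareto_set, adapt_predictor, run_search,
                       prune_non_pareto, srcc_threshold=0.9):
    """Sketch of Algorithm 1. Caller-supplied callables:
    measure_on_target(arch) -> measured latency on the target device,
    proxy_predictor(arch)   -> predicted latency on the proxy device,
    adapt_predictor(...)    -> AdaProxy predictor fitted via Eqn. (3),
    run_search(predictor)   -> evolutionary search under a latency predictor,
    prune_non_pareto(archs) -> drop candidates that are not Pareto-optimal
                               according to measured target-device latencies."""
    # Lines 3-4: measure a small sample (~30-50 archs) and estimate the SRCC.
    target_lat = [measure_on_target(a) for a in sample_archs]
    proxy_lat = [proxy_predictor(a) for a in sample_archs]
    srcc, _ = spearmanr(proxy_lat, target_lat)

    # Lines 5-10: re-use the proxy's Pareto set under strong monotonicity,
    # otherwise adapt the proxy predictor (AdaProxy) and re-run the search.
    if srcc >= srcc_threshold:
        candidates = list(proxy_pareto_set)
    else:
        ada_proxy = adapt_predictor(proxy_predictor, sample_archs, target_lat)
        candidates = run_search(ada_proxy)

    # Line 11: validate the remaining few candidates on the real target device.
    return prune_non_pareto(candidates)
```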
Fig. 11. (a) Architectures found by evolutionary search in the MobileNet-V2 search space. All latencies are measured on S5e (Mobile). Architectures searched on 4790 (Desktop CPU) and T4 (Desktop GPU) are highly sub-optimal compared to those searched specifically on S5e; these two devices have SRCCs of 0.78 and 0.72 with S5e, respectively. (b)(c) SRCC estimation: the x-axis denotes the number of sample architectures randomly selected per run, and we use 1000 runs to calculate the mean and standard deviation. "x-y" means the device pair is (x, y).

Removing non-Pareto-optimal architectures. We measure the actual latencies of the Pareto-optimal architectures (obtained for either the proxy or the adapted proxy device) on the target device, and remove the non-Pareto-optimal architectures.

5.3 Prerequisite and Checking Latency Monotonicity

5.3.1 Prerequisite. We first select a proxy device $d_0$ that preferably has good latency monotonicity with the other target devices. To do so, we can first measure the latencies of a small set $\mathcal{A}$ of sample architectures (e.g., 30-50 sample architectures in our experiments) on all the target devices, and calculate the resulting SRCC values for each pair of devices based on the measured latencies. We only need to measure the overall inference latency, unlike building latency predictors, which typically requires profiling the latency of each operator/layer for thousands of architectures [30,50]. Then, we can obtain an SRCC matrix like the one shown in Fig. 4(d). Latency measurement on a small set of sample architectures is also needed to check latency monotonicity (Section 5.3.2) and hence is not an extra step. Next, we can choose a proxy device that has high SRCCs with a good number of other devices. Note that proxy device selection does not need to be very precise; even if we choose a proxy device that does not have high SRCCs with many other devices, our proposed proxy adaptation technique can still significantly boost the SRCC between the selected proxy device and the target devices.

For the selected proxy device, we run SOTA hardware-aware NAS to find Pareto-optimal architectures. Specifically, following the one-shot NAS approach [9,43], we first pre-train a supernet and build an accuracy predictor. We then build a latency predictor, denoted by $L_{d_0}(x)$, based on extensive latency profiling and SOTA methods for latency prediction [19,50]. Finally, we apply evolutionary search [15,43], which quickly produces the Pareto-optimal architecture set $\mathcal{P}_{d_0}$ by varying the latency constraint. Once the accuracy predictor and latency predictor are built, running evolutionary search takes at most a few minutes and hence is negligible.

5.3.2 Checking latency monotonicity. To check whether strong latency monotonicity is satisfied between the selected proxy device and a target device, we estimate the SRCC based on a small set $\mathcal{A}$ of sample architectures and then compare it against a threshold. The latency measurement for the small set of sample architectures is already performed during the proxy selection process. In Figs. 11(b) and 11(c), we can see that latency measurement based on a few sample architectures is enough to reliably estimate the SRCC value: e.g., if we set 0.9 as the SRCC threshold, then 30-50 sample architectures are sufficient. Thus, the cost of measuring latencies for the small set $\mathcal{A}$ of sample architectures is negligible compared to building a device-specific latency predictor.
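The stability of this small-sample SRCC estimate can be checked with a simple resampling experiment in the spirit of Figs. 11(b) and 11(c). The sketch below assumes two full latency vectors are available (e.g., from an existing latency dataset); the synthetic vectors at the bottom are placeholders, not our measurements.

```python
import numpy as np
from scipy.stats import spearmanr

def srcc_subsample_stats(lat_a, lat_b, sample_size, num_runs=1000, seed=0):
    """Mean and std of the SRCC estimated from random subsets of the
    architecture set, as a function of the number of sampled architectures."""
    rng = np.random.default_rng(seed)
    lat_a, lat_b = np.asarray(lat_a), np.asarray(lat_b)
    estimates = []
    for _ in range(num_runs):
        idx = rng.choice(len(lat_a), size=sample_size, replace=False)
        srcc, _ = spearmanr(lat_a[idx], lat_b[idx])
        estimates.append(srcc)
    return float(np.mean(estimates)), float(np.std(estimates))

# Example usage with synthetic, correlated latencies (placeholders for real data).
rng = np.random.default_rng(1)
proxy = rng.uniform(20, 200, size=10000)              # hypothetical proxy latencies (ms)
target = proxy * 1.8 + rng.normal(0, 10, size=10000)  # noisy but roughly monotone target
for k in (10, 30, 50, 100):
    mean, std = srcc_subsample_stats(proxy, target, k)
    print(f"{k:>4} samples: SRCC = {mean:.3f} +/- {std:.3f}")
```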
InFigs.11(b)and11(c), we can see that latency measurement based on a few sample architectures isenough to reliably estimate the SRCC value: e.g., if we set 0.9 as the SRCC threshold, then 30-50Proc. ACM Meas. Anal. Comput. Syst., Vol. 5, No. 3, Article 34. Publication date: December 2021.34:14 Bingqian Lu, Jianyi Yang, Weiwen Jiang, Yiyu Shi, and Shaolei Rensample architectures are sufficient. Thus, the cost for measuring latencies for the small setAofsample architectures is negligible compared to building a device-specific latency predictor.5.4 Increasing Latency Monotonicity by Adapting the Proxy Latency PredictorAs illustrated in Fig.11(a), in case of weak latency monotonicity, we cannot re-use the Pareto-optimal architectures found for the proxy device to a new target device. To address this issue, wepropose an efficient transfer learning technique — adapting the proxy’s latency predictor to thetarget device — to boost latency monotonicity.5.4.1 A close look at SOTA latency predictors.We first review three major types of SOTA latencypredictors used in hardware-aware NAS.•Operator-level latency predictor.A straightforward approach is to first profile each operator[10,15] (or each layer [6,39]), and then sum all the operator-level latencies as the end-to-end latencyof an architecture. Specifically, given/u1D43Eoperators (e.g., each with a searchable kernel size andexpansion ratio), we can represent each operator using one-hot encoding: 1 means the respectiveoperator is included in an architecture, and 0 otherwise. Thus, an architecture can be represented asx∈{0,1}/u1D43E∪{1}, where the additional{1}represents the non-searchable part, e.g., fully-connectedlayers in CNN, of the architecture. Accordingly, the latency predictor can be written as/u1D459=w/u1D447x,wherew∈R/u1D43E+1is the operator-level latency vector. This approach needs a few thousands oflatency measurement samples (taking up a few tens of hours) [10,30].•GCN-based latency predictor.To better capture the graph topology of different operators, arecent study [19] uses a graph convolutionary network (GCN) to predict the inference latency fora target device. Concretely, the latency predictor can be written as/u1D459=/u1D43A/u1D436/u1D441Θ(x), whereΘis thelearnt GCN parameter learnt andxis the graph-based encoding of an architecture.•Kernel-level latency predictor.Another recent latency predictor is to use a random forestto estimate the latency for each execution unit (called “kernel”) that captures different compilersand execution flows, and then sum up all the involved execution units as the latency of the entirearchitecture [50]. This approach unifies different DNN frameworks, such as TensorFlow and Onnx,into a single model graph, and hence can predict latencies for models developed using differentframeworks. By encoding an architecture based on the execution units, we can also transformthe latency predictor into a linear one:/u1D459=w/u1D447xwherewis the vector of latencies for differentexecution units andxdenotes the number of each execution unit included in an architecture. Thus,an “execution unit” in [50] is conceptually equivalent to a searchable operator in the operator-levellatency predictor [10].Summary.The three SOTA latency predictors use different encodings/representations for anarchitecture: the encoding based on searchable operators in an operator-level predictor is thesimplest, while the encoding based on fine-grained execution units in a kernel-based predictorhas the most details of an architecture. 
5.4.2 Adapting the proxy latency predictor. We propose efficient transfer learning to boost the otherwise possibly weak latency monotonicity for a target device.

Intuition. Even though two devices may have weak latency monotonicity, this does not mean that their latencies for each searchable operator are uncorrelated; instead, for most operators, their latencies are still roughly proportional. The reason is that a more complex operator with higher FLOPs that is slower (say, 2x slower than a reference operator) on one device is generally also slower on another device, although there may be some differences in the slow-down factor (say, 2x vs. 1.9x). This is also the reason why some NAS algorithms use the device-agnostic metric of architecture FLOPs as a rough approximation of the actual inference latency [40,41]. If we view proxy adaptation as a new learning task, this task is highly correlated with the task of building the proxy device's latency predictor, and such correlation can greatly facilitate transfer learning.

Approach. To explain our transfer learning approach, we consider the proxy device's latency predictor in a linear form: $L_{d_0}(x) = w^{\top} x$, where $w$ is the weight and $x$ is the architecture representation (e.g., the one-hot encoding of the searchable operators, the penultimate-layer output in a neural network-based predictor,³ or the encoding of the execution units). We measure the latencies of a small set of sample architectures $x \in \mathcal{A}$ on the target device, noting that this step is also needed to check the SRCC value and incurs a negligible overhead compared to SOTA approaches (i.e., tens of hours of latency measurement [29,50]). Then, with the latency measurement samples denoted by $(x_i, y_i)$, we quickly adapt the proxy device's latency predictor to $L_{d_0,d}(x) = \left[(\alpha I^{\top} + b^{\top}) \circ w^{\top}\right] x$, tailored to the target device, by solving the following problem:

  $\min_{\alpha, b} \; \frac{1}{N} \sum_{i} \left\| \left[ (\alpha I^{\top} + b^{\top}) \circ w^{\top} \right] x_i - y_i \right\|^2 + \lambda |b|,$   (3)

where $I$ is the identity vector with all elements being 1, the operator "$\circ$" denotes element-wise multiplication, and $\lambda \ge 0$ is a hyperparameter controlling the weight of the sparsity regularization term $|b|$, tuned on a small validation set of architectures (20 architectures in our experiments) split from the sample architecture set $\mathcal{A}$.

³ If the proxy device uses a neural network-based latency predictor, we can also fix the earlier layers while updating the weights of the last few layers, instead of only updating the last single layer.

The interpretation of Eqn. (3) is as follows. First, the scaling factor $\alpha$ reflects our intuition that a more complex operator that is slower on one device is generally also slower on another device. Second, the sparsity term $b$ accounts for the fact that the slow-down factors of an operator on two devices are not necessarily the same.

With $L_{d_0,d}(x)$, we essentially construct a new virtual proxy device (called the adapted proxy, or AdaProxy) whose latency is given by $L_{d_0,d}(x)$. Here, our goal is to increase the latency monotonicity between the new virtual proxy and the target device; we do not need to create a new latency predictor that produces accurate estimates of the absolute latency values on the target device.
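A minimal sketch of this adaptation step is shown below: it fits the scaling factor α and the sparse correction b of Eqn. (3) by plain (sub)gradient descent, which is adequate for the small sample sizes involved (on the order of 50 measurements). The initialization, learning rate, and λ are illustrative choices, not the exact settings used in our experiments.

```python
import numpy as np

def adapt_proxy_predictor(X, y, w, lam=0.1, lr=1e-3, steps=5000):
    """Fit L_{d0,d}(x) = [(alpha*1 + b) o w]^T x to target-device measurements.

    X: (N, K) architecture encodings of the sampled architectures
    y: (N,)  latencies measured on the target device
    w: (K,)  weight vector of the proxy device's (linear) latency predictor
    Returns (alpha, b) approximately minimizing MSE + lam * ||b||_1 (Eqn. (3)).
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    N = len(y)
    alpha, b = 1.0, np.zeros_like(w)
    for _ in range(steps):
        coeff = (alpha + b) * w                      # element-wise: (alpha*1 + b) o w
        resid = X @ coeff - y                        # prediction error on the sample set
        grad_coeff = 2.0 / N * (X.T @ resid)         # d(MSE)/d(coeff)
        grad_alpha = float(np.sum(grad_coeff * w))   # chain rule: d(coeff)/d(alpha) = w
        grad_b = grad_coeff * w + lam * np.sign(b)   # subgradient of the L1 term
        alpha -= lr * grad_alpha
        b -= lr * grad_b
    return alpha, b

def adapted_latency(x, w, alpha, b):
    """AdaProxy latency prediction for one architecture encoding x."""
    return float(((alpha + b) * np.asarray(w, dtype=float)) @ np.asarray(x, dtype=float))
```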
If strong latency monotonicity still does not hold between AdaProxy and the target device, we can incrementally measure the latencies of another small set of sample architectures on the target device and re-solve Eqn. (3). In the majority of our experiments, 50 latency measurements on the target device are enough to achieve strong latency monotonicity. This is negligible compared to the thousands of latency profilings and measurements used by SOTA algorithms [10,50].

Next, with the adapted latency predictor $L_{d_0,d}(x)$ that reflects the architecture latency rankings on the target device $d$, we can run evolutionary search to find the set of Pareto-optimal architectures.

5.5 Removing Non-Pareto-Optimal Architectures

Up to this point, we have obtained for the target device an architecture set $\mathcal{P}_d$, which is the same as the proxy (or AdaProxy) device's Pareto-optimal set. While the latency monotonicity between the proxy (or AdaProxy) device and the target device is strong (e.g., an SRCC around 0.9 or higher), it is not perfect. Thus, some architectures in $\mathcal{P}_d$ may not be Pareto-optimal for the target device. We remove these architectures based on their actual latencies measured on the target device. Specifically, if an architecture $x_1 \in \mathcal{P}_d$ has a higher latency but the same or similar accuracy compared to another architecture $x_2 \in \mathcal{P}_d$, we can remove $x_1$ from $\mathcal{P}_d$.

Finally, if there is a specific latency constraint that is not satisfied by the architectures in $\mathcal{P}_d$, we can re-run evolutionary search with the assistance of $L_{d_0}(x)$, or the adapted $L_{d_0,d}(x)$ if applicable, to further enlarge the set $\mathcal{P}_d$. The key point is that we do not need to go through a very time-consuming process to build a new latency predictor specifically for the target device.

In summary, the cost of measuring the latencies of a small sample set of architectures on the target device for checking latency monotonicity (and, if needed, adapting $L_{d_0}(x)$) is negligible. Therefore, given $n$ different devices, we achieve a total latency evaluation cost of $O(1)$, which, when combined with SOTA NAS algorithms that have an $O(1)$ cost for model training and accuracy evaluation [9,15], successfully keeps the entire NAS cost at $O(1)$.

6 EXPERIMENT

We run experiments on multiple devices (including mobile phones, desktop GPU/CPU, ASIC, etc.) on different mainstream search spaces — MobileNet-V2, MobileNet-V3, NAS-Bench-201, and FBNet.

6.1 Results on MobileNet-V2

6.1.1 Setup. We now present the setup for our experiments on MobileNet-V2.

Search Space. As in [10], the backbone of our CNN architecture is MobileNet-V2 with multiplier 1.3, with the channel number in each block fixed. The search space consists of the depth of each stage, the kernel size of the convolutional layers, and the expansion ratio of each block. The depth can be chosen from {2, 3, 4}, the kernel size can be {3, 5, 7}, and the candidate expansion ratios are {3, 4, 6}. There are five stages whose configurations can be searched.
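As an illustration of this search space, the sketch below draws one random configuration (5 stages, each with a depth in {2, 3, 4} and, for each included block, a kernel size in {3, 5, 7} and an expansion ratio in {3, 4, 6}). The encoding is a simplified stand-in for the actual Once-For-All encoding, shown only to make the space concrete; the size computation at the end is a rough count consistent with the >10^19 figure cited in Section 2.1.

```python
import random

DEPTHS = (2, 3, 4)
KERNEL_SIZES = (3, 5, 7)
EXPAND_RATIOS = (3, 4, 6)
NUM_STAGES = 5

def sample_architecture(rng=random):
    """Randomly sample one architecture configuration from the
    MobileNet-V2-like space described above."""
    arch = []
    for _ in range(NUM_STAGES):
        depth = rng.choice(DEPTHS)
        blocks = [{"kernel": rng.choice(KERNEL_SIZES),
                   "expand": rng.choice(EXPAND_RATIOS)}
                  for _ in range(depth)]
        arch.append({"depth": depth, "blocks": blocks})
    return arch

# Rough space size: per stage, sum over depths d of (3 kernels x 3 ratios)^d choices.
per_stage = sum(9 ** d for d in DEPTHS)
print(sample_architecture())
print(f"~{per_stage ** NUM_STAGES:.2e} configurations in total")
```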
NAS Method. We consider one-shot NAS and use the Once-For-All network [9] as a supernet that has the same search space as ours. We run evolutionary search to find optimal architectures for the proxy (or AdaProxy) device. Our parameter settings are: the population size is 1000, the parent ratio is 0.25, the mutation probability is 0.1, the mutation ratio is 0.25, and we search for 50 generations under each latency constraint. Evolutionary search takes less than 30 seconds per run. To facilitate the reader's understanding, we provide a summary of evolutionary search in Appendix A, while the full details can be found in [15,43].

Accuracy Predictor. The evolutionary search is assisted by an accuracy predictor for fast architecture performance evaluation [15,43]. Our accuracy predictor is a neural network with four fully-connected layers, updated with 176 samples on top of the predictor used in [9]. The accuracy predictor takes a 128-dimensional feature vector (converted from a 21-dimensional architecture configuration within the search space) as input. Fig. 12(a) compares the actual and predicted accuracies, which have an SRCC of 0.903 and a root mean squared error of 1.11%. The performance of our accuracy predictor is in line with the existing NAS literature for MobileNet-based models [9]. As a result, the imperfection of the accuracy predictor explains why a strong, but not perfect, latency monotonicity (e.g., SRCC > 0.9) is enough for our one-proxy approach to find Pareto-optimal architectures for a new target device.

Latency Predictor. We build device-specific latency predictors in the MobileNet-V2 space for the four devices listed in Table 2. Specifically, for each sample architecture, we profile the average latency over 1000 runs. We use a single thread for running the TensorFlow Lite interpreter by default. To show the accuracy of our latency predictors, we sample a few additional models and measure their actual latencies on our four mobile devices. The comparison between actual and predicted latency is shown in Fig. 12, with a root mean squared error of 2.88 ms on S5e, 4.69 ms on TabA, 3.72 ms on Lenovo, and 59.18 ms on the low-end Vankyo. As corroborated by prior studies [15,43,47], our results show that the predicted average latency is almost identical to the actual value.

We choose the S5e mobile phone as the proxy device. Our results when using other mobile devices as the proxy are nearly the same, because S5e and the other mobile devices have SRCCs close to 1.0 (Fig. 4(b)), i.e., these mobile devices can almost be viewed as one device based on Proposition 3.1.

Fig. 12. (a) Actual vs. predicted accuracy; the root mean squared error is 1.11% and the SRCC is 0.903. (b)(c)(d)(e) Measured average inference latency versus latency predicted by the latency lookup tables; the root mean squared errors for S5e, TabA, Lenovo, and Vankyo are 2.88 ms, 4.69 ms, 3.72 ms, and 59.18 ms, respectively.

Architecture Evaluation. For a searched architecture, the actual model performance is measured. We evaluate accuracies on the ImageNet validation dataset [16], which consists of 50,000 images in 1000 classes. Accuracy evaluation is run on Google Colab equipped with a Tesla T4 GPU.
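For reference, a stripped-down version of this evolutionary search loop with the parameter settings listed above is sketched below. The callables accuracy_of and latency_of stand for the accuracy predictor and the proxy/AdaProxy latency predictor, and mutate/crossover operate on the architecture encoding; all are caller-supplied and hypothetical, and the exact roles of the parent/mutation ratios follow common once-for-all-style conventions rather than a released implementation.

```python
import random

def evolutionary_search(init_population, accuracy_of, latency_of, latency_limit,
                        mutate, crossover, population_size=1000, parent_ratio=0.25,
                        mutation_prob=0.1, mutation_ratio=0.25, generations=50):
    """Latency-constrained evolutionary search, roughly following Section 6.1.1."""
    feasible = lambda a: latency_of(a) <= latency_limit
    population = [a for a in init_population if feasible(a)][:population_size]

    for _ in range(generations):
        # Keep the top-scoring parents under the (predicted) accuracy.
        population.sort(key=accuracy_of, reverse=True)
        parents = population[: int(parent_ratio * population_size)]

        # Refill the population: a fraction via mutation, the rest via crossover,
        # accepting only children that satisfy the latency constraint.
        num_mutation = int(mutation_ratio * population_size)
        children = []
        while len(children) < population_size - len(parents):
            if len(children) < num_mutation:
                child = mutate(random.choice(parents), mutation_prob)
            else:
                child = crossover(random.choice(parents), random.choice(parents))
            if feasible(child):
                children.append(child)
        population = parents + children

    return max(population, key=accuracy_of)
```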
6.1.2 Baselines. We consider the following baselines for hardware-aware NAS.

#1: Building a Latency Predictor for Each Target Device [9,15,19,43]. For each device, we use the same evolutionary search described in Section 6.1.1. While the accuracy predictor is reusable across devices and evolutionary search is quick, measuring the latencies of thousands of architectures to build a device-specific latency predictor (as done in the existing hardware-aware NAS [9,19,43]) is time-consuming. Thus, this approach has a total cost of $O(n)$ for $n$ devices [5,15].

#2: Heuristic Model Scaling. There are different ways to scale a CNN to meet different latency constraints, e.g., adapting the network depth and/or width [40,41]. Since the number of channels in our backbone network is fixed, we heuristically scale the depth of a Pareto-optimal architecture on the proxy device by increasing (for higher accuracy) or reducing (for lower latency) the depth by up to two blocks, and transfer the scaled architecture to new target devices. This approach has $O(1)$ complexity.

The two baselines highlight that the existing hardware-aware NAS either achieves Pareto optimality but has an $O(n)$ latency evaluation cost (Baseline #1), or keeps the latency evaluation cost at $O(1)$ but loses Pareto optimality (Baseline #2). By contrast, our approach has an $O(1)$ latency evaluation cost in total, while preserving Pareto optimality.

6.1.3 Performance of Searched Architectures. We compare the measured top-1 accuracy on ImageNet versus the average inference latency of searched architectures on each target device.

Mobile Devices. Fig. 13 shows the results for three different target mobile devices, all using S5e as the proxy device. The SRCC values between S5e and the target devices are all greater than or equal to 0.98 (Fig. 4(b)). We see that the architectures searched on S5e can result in almost the same (accuracy, latency) tradeoff as device-specific NAS, while the additional latency evaluation cost for each target device is negligible. Further, we see that despite its $O(1)$ complexity, heuristic scaling (Baseline #2) can result in very poor architectures without performance guarantees.

Fig. 13. Results on three different mobile target devices ((a) TabA, (b) Lenovo, (c) Vankyo), using S5e as the proxy device. "Target" is Baseline #1, "Proxy" means using our approach with S5e as the proxy device, and "Scaling" means heuristic scaling applied to one of S5e's Pareto-optimal architectures.

Non-Mobile Devices. We show the results in Fig. 14 for non-mobile devices. As these devices have low SRCC values with our S5e proxy, we use Eqn. (3) to create an AdaProxy device, which has an SRCC of close to 0.9 or higher with the target devices. The details of the proxy adaptation process, including the SRCC values before and after proxy adaptation, are available in Appendix B.2.

The top row shows the architectures found by evolutionary search. We see that with a low SRCC (around 0.7-0.8), the architectures searched on the proxy device are not Pareto-optimal on the target devices. With proxy adaptation, the SRCC increases significantly, and the architectures searched on the AdaProxy device are almost the same as those directly searched on the target device.
Fig. 14. Results for non-mobile target devices with the default S5e proxy and AdaProxy: (a)(d) Tesla T4, (b)(e) Quadro M4000, (c)(f) Intel i7-4770HQ. The top row shows the evolutionary search results with real measured accuracies, and the bottom row shows the exhaustive search results based on 10k random architectures and predicted accuracies.

This highlights the need for strong latency monotonicity between the proxy and the target device, as well as the effectiveness of our proposed proxy adaptation technique in boosting the latency monotonicity. The heuristic scaling approach (Baseline #2) performs even worse than directly using the architectures searched on the proxy device, and its results are hence omitted.

The bottom row shows the exhaustive search results over 10k randomly selected architectures, using the predicted accuracies as the true values. This essentially considers a semi-oracle NAS process (on a small space of 10k architectures) assuming a perfect accuracy predictor. As a result, compared to evolutionary search using an imperfect accuracy predictor, it may have a more stringent requirement on the SRCC between the target device and the proxy (or AdaProxy) device. We see that, due to the low SRCC, the architectures found by using the proxy device's latency predictor may not overlap with the oracle's Pareto-optimal boundary. In fact, some of the proxy's optimal architectures can perform very poorly on the target device. For example, Fig. 14(d) shows that S5e's optimal architectures are highly sub-optimal on Tesla T4. On the other hand, with improved SRCC, the architectures found by using the AdaProxy device's latency predictor preserve Pareto optimality very well on the target devices, again demonstrating the necessity and effectiveness of our proxy adaptation technique in the presence of weak latency monotonicity between the default proxy and target device.

Additional results, including the settings for proxy adaptation and a comparison of exhaustively searched architectures on other devices, can be found in Appendix B.2.

6.2 Results on NAS-Bench-201, FBNet, and nn-Meter

We now evaluate our approach on the recently released latency datasets for six different devices on the NAS-Bench-201 and FBNet spaces [29], additional devices on NAS-Bench-201 [19], as well as four devices on nine different search spaces [49].

We first consider the latency results on the NAS-Bench-201 search space using the CIFAR-10 dataset [29]. Since NAS-Bench-201 is a simple architecture space with only around 15k architectures, we consider an oracle NAS process via exhaustive search.
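A minimal sketch of such an exhaustive (oracle) Pareto-front computation is shown below; it assumes each architecture is already annotated with its accuracy and its measured latency on the device of interest, and the helper name is ours.

```python
def pareto_front(archs):
    """Return the architectures not dominated in (accuracy, latency) space.

    archs: list of (arch_id, accuracy, latency) tuples; an architecture is kept
    only if no other architecture is both faster and at least as accurate.
    """
    ordered = sorted(archs, key=lambda a: (a[2], -a[1]))  # latency ascending, accuracy descending
    front, best_acc = [], float("-inf")
    for arch_id, acc, lat in ordered:
        if acc > best_acc:  # strictly more accurate than every faster (or equally fast) model
            front.append((arch_id, acc, lat))
            best_acc = acc
    return front

# Dummy example: the 2nd entry is dominated by the 1st (slower and less accurate) and is dropped.
print(pareto_front([("a", 71.0, 10.0), ("b", 70.5, 12.0), ("c", 73.2, 15.0)]))
```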
Thus, compared to evolutionary search using an imperfect accuracy predictor, the oracle NAS process can have a more stringent requirement on the SRCC between the target device and the proxy (or AdaProxy) device. We use Pixel3 as the default proxy which, as shown in Fig. 15, does not have strong latency monotonicity with the target devices (except for Raspi4). By proxy adaptation, we can significantly boost the latency monotonicity, increasing the SRCC values to 0.9 or higher.

Fig. 15. SRCC for various devices (Edge GPU, Edge TPU, Raspi4, Pixel3, Eyeriss, FPGA) in the NAS-Bench-201 search space on CIFAR-10. Pixel3 is our proxy device. SRCC values boosted with AdaProxy are highlighted.

Next, Fig. 16 shows the optimal architectures found by using the proxy device's latency predictor, the adapted latency predictor, and the oracle, respectively. We can see that, due to the low pre-adaptation SRCC values between the proxy device Pixel3 and the target devices, only a few architectures that are optimal for the proxy are still optimal for the target devices after architecture removal (Section 5.5). Moreover, even the proxy's remaining optimal architectures can be far from optimal on the target device. For example, Fig. 16(a) shows that some of Pixel3's optimal architectures deviate from the Pareto-optimal boundary on the edge GPU. By using proxy adaptation and increasing the SRCC values, the AdaProxy's optimal architectures can be efficiently transferred to target devices while preserving optimality. The proxy device Pixel3 has a high SRCC of 0.96 with Raspi4, even without proxy adaptation. Thus, as shown in Fig. 16(e), the optimality of Pixel3's architectures is preserved very well on Raspi4. All these results demonstrate the importance of strong monotonicity between the proxy and the target device, as well as the effectiveness of our proxy adaptation technique, for scalable hardware-aware NAS.

Fig. 16. Exhaustive search results for different target devices ((a) Edge GPU, (b) Edge TPU, (c) Eyeriss, (d) FPGA, (e) Raspberry Pi 4) on NAS-Bench-201 architectures (CIFAR-10 dataset) [17,29]. Pixel3 is the proxy.

Additional results, including the details of proxy adaptation and results on other search spaces, are available in Appendix B. These results further validate our approach and highlight the practical feasibility of using only one proxy device for scalable hardware-aware NAS.

7 RELATED WORK

The huge search space for neural architectures presents significant challenges (see [20,30,33,40,41,45,52] and references therein). To minimize the cost of training numerous architectures, one-shot NAS uses a supernet that includes the weights of all candidate architectures [4,5,9,23,34]. In recent years, transformer-based vision models have also emerged and inspired studies on transformer architecture search [18], but this is orthogonal to the NAS setting we focus on.

Importantly, fast evaluation of accuracy and inference latency to rank different architectures is crucial for efficient hardware-aware NAS [20,30,33,40,41,45,52]. To reduce the cost of accuracy evaluation, prior studies have considered reinforcement learning with accuracy evaluated on a small proxy dataset [52], Bayesian optimization-based NAS (to reduce the number of sampled and evaluated architectures) [35], generative approaches [27], one-shot or few-shot NAS [4,9,51], and NAS assisted with an accuracy predictor [15,43].
More recently, ranking architecture accuracies based on easily-computable proxy metrics has also been studied: e.g., computing a model score based on a small minibatch of training data [1], and analyzing the neural tangent kernel (NTK) as well as the number of linear regions in the input space [11].

To expedite inference latency evaluation, SOTA hardware-aware NAS has mainly resorted to device-specific latency predictors [5,10,13,15,19,34,43,47]. Nonetheless, building even one latency predictor incurs a non-trivial upfront cost. Thus, [19,29,50] have recently released latency datasets and predictors, but only for a few devices due to the prohibitive time cost.

Given many diverse devices, the scalability of latency evaluation is critically important. A straightforward approach is to build a meta latency predictor that incorporates hardware features as additional inputs [28,32]. Nonetheless, this approach has significant drawbacks: (1) numerous latency measurements on a large number of heterogeneous devices are required in advance for meta-training; (2) there is a fundamental challenge in achieving provably-good generalization to new, unseen target devices that deviate significantly from the training device pool (i.e., out-of-distribution devices); and (3) the process of meta-learning and adaptation to new devices involves complex hyperparameter tuning, adding considerable uncertainty to the latency prediction performance. For example, to cover 24 devices with good generalization performance in the experiments of [28], up to 18 heterogeneous devices are used for meta-training, with 900/4000 architecture latencies collected for each device on the NAS-Bench-201/FBNet search space, while only the remaining 6 devices are used for testing. Crucially, these meta latency predictors [28,32] aim at producing accurate latency predictions with low prediction errors, which adds further challenges to the prediction model but is unnecessary for hardware-aware NAS. By contrast, what matters most is the architecture latency ranking on a target device, for which sophisticated (meta) latency predictors may not offer substantial benefits. We show both theoretically and empirically that one proxy device with strong latency monotonicity with respect to the target devices (after proxy adaptation if needed) is enough for hardware-aware NAS, truly keeping the total latency evaluation cost at $O(1)$.

By considering a synthetic latency metric aggregated over a few devices, simultaneous multi-device NAS [13] may not meet the latency constraint or achieve Pareto optimality for any of the involved devices. Heuristic scaling approaches, e.g., changing the number of layers and/or channels [36,40,41], can limit the architecture space and hence reduce both accuracy and latency evaluation costs, but they may also miss Pareto-optimal architectures because of their coarse scaling granularity. Architecture FLOPs is a device-agnostic proxy metric, but it cannot accurately reflect the true latency ranking of architectures on real devices [15,29,40]. While various proxy metrics (e.g., NTK [11]) have been considered for accuracy evaluation, our approach of using one proxy device is the first to address the complementary challenge of fast latency evaluation in the presence of many diverse devices.

8 CONCLUSION

In this paper, we efficiently scale up hardware-aware NAS for diverse target devices.
Concretely, we demonstrate latency monotonicity among different devices, and propose to use just one proxy device's latency predictor for NAS. When latency monotonicity is not satisfied between the proxy device and the target device, we propose an efficient transfer learning technique, which adapts the proxy's latency predictor to the target device, to boost latency monotonicity. Overall, our approach results in a much lower total cost of latency evaluation, without losing Pareto optimality. For evaluation, we conduct experiments with devices of different platforms on mainstream search spaces, including the MobileNet-V2, MobileNet-V3, NAS-Bench-201, and FBNet spaces.

ACKNOWLEDGEMENT

Bingqian Lu, Jianyi Yang, and Shaolei Ren are supported in part by the NSF under grants CNS-1910208 and CNS-2007115. Yiyu Shi is supported in part by the NSF under grants CNS-1822099 and CNS-2122220. We are grateful to the anonymous reviewers and our shepherd, Sergey Blagodurov, for their valuable comments.

REFERENCES
[1] Mohamed S. Abdelfattah, Abhinav Mehrotra, Łukasz Dudziak, and Nicholas Donald Lane. Zero-cost proxies for lightweight NAS. In ICLR, 2021.
[2] AI-Benchmark. Performance of mobile phones. http://ai-benchmark.com/ranking_detailed.html.
[3] Haldun Akoglu. User's guide to correlation coefficients. Turkish Journal of Emergency Medicine, 18(3):91-93, 2018.
[4] Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understanding and simplifying one-shot architecture search. In ICML, 2018.
[5] Gabriel Bender, Hanxiao Liu, Bo Chen, Grace Chu, Shuyang Cheng, Pieter-Jan Kindermans, and Quoc V. Le. Can weight sharing outperform random architecture search? An investigation with TuNAS. In CVPR, 2020.
[6] Ermao Cai, Da-Cheng Juan, Dimitrios Stamoulis, and Diana Marculescu. NeuralPower: Predict and deploy energy-efficient convolutional neural networks. In ACML, 2017.
[7] Han Cai. Latency lookup tables of mobile devices. https://file.lzhu.me/hancai/.
[8] Han Cai. Latency lookup tables of mobile devices and GPUs. https://file.lzhu.me/LatencyTools/tvm_lut/.
[9] Han Cai, Chuang Gan, and Song Han. Once for all: Train one network and specialize it for efficient deployment. In ICLR, 2019.
[10] Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. In ICLR, 2019.
[11] Wuyang Chen, Xinyu Gong, and Zhangyang Wang. Neural architecture search on ImageNet in four GPU hours: A theoretically inspired perspective. In ICLR, 2021.
[12] Hsin-Pai Cheng, Tunhou Zhang, Yukun Yang, Feng Yan, Harris Teague, Yiran Chen, and Hai Li. MSNet: Structural wired neural architecture search for internet of things. In ICCV Workshop, 2019.
[13] Grace Chu, Okan Arikan, Gabriel Bender, Weijun Wang, Achille Brighton, Pieter-Jan Kindermans, Hanxiao Liu, Berkin Akin, Suyog Gupta, and Andrew Howard. Discovering multi-hardware mobile models via architecture search, 2020.
[14] Xiaoliang Dai, Alvin Wan, Peizhao Zhang, Bichen Wu, Zijian He, Zhen Wei, Kan Chen, Yuandong Tian, Matthew Yu, Peter Vajda, and Joseph E. Gonzalez. FBNetV3: Joint architecture-recipe search using predictor pretraining. In CVPR, pages 16276-16285, 2021.
[15] Xiaoliang Dai, Peizhao Zhang, Bichen Wu, Hongxu Yin, Fei Sun, Yanghan Wang, Marat Dukhan, Yunqing Hu, Yiming Wu, Yangqing Jia, et al.
ChamNet: Towards efficient network design through platform-aware model adaptation. In CVPR, 2019.
[16] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[17] Xuanyi Dong and Yi Yang. NAS-Bench-201: Extending the scope of reproducible neural architecture search. In ICLR, 2020.
[18] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
[19] Lukasz Dudziak, Thomas Chau, Mohamed S. Abdelfattah, Royson Lee, Hyeji Kim, and Nicholas D. Lane. BRP-NAS: Prediction-based NAS using GCNs. In NeurIPS, 2020.
[20] Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. Journal of Machine Learning Research, 20(55):1-21, 2019.
[21] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.
[22] Google. TensorFlow Lite image classification app. https://www.tensorflow.org/lite/models/image_classification/overview.
[23] Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In ECCV, 2020.
[24] Mark Hill and Vijay Janapa Reddi. Gables: A roofline model for mobile SoCs. In HPCA, 2019.
[25] Andrey Ignatov, Radu Timofte, Andrei Kulik, Seungsoo Yang, Ke Wang, Felix Baum, Max Wu, Lirong Xu, and Luc Van Gool. AI Benchmark: All about deep learning on smartphones in 2019. In ICCVW, 2019.
[26] Weiwen Jiang, Lei Yang, Sakyasingha Dasgupta, Jingtong Hu, and Yiyu Shi. Standing on the shoulders of giants: Hardware and neural architecture co-search with hot start. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2020.
[27] Sheng-Chun Kao, Arun Ramamurthy, and Tushar Krishna. Generative design of hardware-aware DNNs. 2020.
[28] Hayeon Lee, Sewoong Lee, Song Chong, and Sung Ju Hwang. HELP: Hardware-adaptive efficient latency predictor for NAS via meta-learning. In NeurIPS, 2021.
[29] Chaojian Li, Zhongzhi Yu, Yonggan Fu, Yongan Zhang, Yang Zhao, Haoran You, Qixuan Yu, Yue Wang, Cong Hao, and Yingyan Lin. HW-NAS-Bench: Hardware-aware neural architecture search benchmark. In ICLR, 2021.
[30] Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In ECCV, 2018.
[31] Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In ICLR, 2019.
[32] Bingqian Lu, Jianyi Yang, and Shaolei Ren. Poster: Scaling up deep neural network optimization for edge inference. In IEEE/ACM Symposium on Edge Computing (SEC), 2020.
[33] Qing Lu, Weiwen Jiang, Xiaowei Xu, Yiyu Shi, and Jingtong Hu. On neural architecture search for resource-constrained hardware platforms. In ICCAD, 2019.
[34] Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Xiaojiang Chen, and Xin Wang. A comprehensive survey of neural architecture search: Challenges and solutions. ACM Comput. Surv., 54(4), May 2021.
[35] Binxin Ru, Xingchen Wan, Xiaowen Dong, and Michael Osborne. Neural architecture search using Bayesian optimisation with Weisfeiler-Lehman kernel. In ICLR, 2021.
[36] Manas Sahni, Shreya Varshini, Alind Khare, and Alexey Tumanov. CompOFA: Compound once-for-all networks for faster multi-platform deployment.
In ICLR, 2021.
[37] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In CVPR, 2018.
[38] Han Shi, Renjie Pi, Hang Xu, Zhenguo Li, James T. Kwok, and Tong Zhang. Multi-objective neural architecture search via predictive network performance optimization. arXiv preprint arXiv:1911.09336, 2019.
[39] Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, and Diana Marculescu. Single-path NAS: Designing hardware-efficient ConvNets in less than 4 hours. In ECML-PKDD, 2019.
[40] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V. Le. MnasNet: Platform-aware neural architecture search for mobile. In CVPR, 2019.
[41] Mingxing Tan and Quoc Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In ICML, 2019.
[42] Hanrui Wang, Zhanghao Wu, Zhijian Liu, Han Cai, Ligeng Zhu, Chuang Gan, and Song Han. HAT: Hardware-aware transformers for efficient natural language processing. In ACL, 2020.
[43] Tianzhe Wang, Kuan Wang, Han Cai, Ji Lin, Zhijian Liu, Hanrui Wang, Yujun Lin, and Song Han. APQ: Joint search for network architecture, pruning and quantization policy. In CVPR, 2020.
[44] Samuel Williams, Andrew Waterman, and David Patterson. Roofline: An insightful visual performance model for multicore architectures. Communications of the ACM, 2009.
[45] Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. FBNet: Hardware-aware efficient ConvNet design via differentiable neural architecture search. In CVPR, 2019.
[46] Carole-Jean Wu, David Brooks, Kevin Chen, Douglas Chen, Sy Choudhury, Marat Dukhan, Kim Hazelwood, Eldad Isaac, Yangqing Jia, Bill Jia, Tommer Leyvand, Hao Lu, Yang Lu, Lin Qiao, Brandon Reagen, Joe Spisak, Fei Sun, Andrew Tulloch, Peter Vajda, Xiaodong Wang, Yanghan Wang, Bram Wasti, Yiming Wu, Ran Xian, Sungjoo Yoo, and Peizhao Zhang. Machine learning at Facebook: Understanding inference at the edge. In HPCA, 2019.
[47] Tien-Ju Yang, Andrew Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, and Hartwig Adam. NetAdapt: Platform-aware neural network adaptation for mobile applications. In ECCV, 2018.
[48] Jiahui Yu, Pengchong Jin, Hanxiao Liu, Gabriel Bender, Pieter-Jan Kindermans, Mingxing Tan, Thomas Huang, Xiaodan Song, Ruoming Pang, and Quoc Le. BigNAS: Scaling up neural architecture search with big single-stage models. In ECCV, 2020.
[49] Li Lyna Zhang, Shihao Han, Jianyu Wei, Ningxin Zheng, Ting Cao, Yuqing Yang, and Yunxin Liu. https://github.com/microsoft/nn-meter.
[50] Li Lyna Zhang, Shihao Han, Jianyu Wei, Ningxin Zheng, Ting Cao, Yuqing Yang, and Yunxin Liu. nn-Meter: Towards accurate latency prediction of deep-learning model inference on diverse edge devices. In MobiSys, 2021.
[51] Yiyang Zhao, Linnan Wang, Yuandong Tian, Rodrigo Fonseca, and Tian Guo. Few-shot neural architecture search. In ICML, 2021.
[52] Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. In ICLR, 2017.
Appendix

In the appendix, we provide a summary of the evolutionary search used in our experiments and additional experimental results.

A SUMMARY OF EVOLUTIONARY SEARCH

A.1 Description

To facilitate the readers' understanding, we provide a summary of the widely-used evolutionary search process for NAS, taking the MobileNet-V2 search space as an example. More details on using evolutionary search in hardware-aware NAS can be found in [15,43].

In our experiment, the total number of searchable blocks is 21, divided into five stages plus the last convolutional layer. Thus, we can use two 21-dimensional vectors to represent the kernel size and expansion ratio of each block, respectively, and one 5-dimensional vector to denote the depth of each stage. The depth can be chosen from "2, 3, 4", the kernel size can be "3, 5, 7", and the candidate expansion ratios are "3, 4, 6". Each individual in the evolutionary search consists of these three vectors. Here is an example individual: {"kernel_size": [5, 3, 5, 7, 5, 3, 5, 3, 7, 7, 5, 7, 5, 3, 3, 5, 5, 3, 5, 5, 3], "expansion_ratio": [3, 3, 4, 6, 4, 3, 4, 6, 4, 3, 6, 4, 3, 4, 3, 4, 4, 3, 3, 4, 3], "depth": [2, 2, 2, 2, 3]}.

To run evolutionary search, we first randomly sample the initial population of individuals according to the population size. Next, we evaluate the fitness of each individual in the population, where the fitness function is defined as

$(t - 1)\cdot \mathit{accuracy} + t\cdot \mathit{latency}, \qquad (4)$

where $t \in [0,1]$ is a weight parameter that balances the tradeoff between the accuracy and latency of each individual model, and $\mathit{accuracy}$ and $\mathit{latency}$ are the predicted values given by the accuracy and latency predictors, respectively. By varying $t \in [0,1]$, we can obtain a set of Pareto-optimal architectures.

In each evolutionary search iteration, we select the fittest individuals as parents for reproduction; they survive into the next generation and also breed new individuals through crossover. For example, if the population size is 1000 and the parent ratio is 0.25, we have 250 fittest individuals as parents. Then, we randomly select a pair of parents each time for crossover and generate a child. Within the crossover process, each element of the child's vectors is chosen randomly from one of the parents. In addition, based on the mutation ratio setting, part of the offspring will further undergo mutation. For example, with mutation ratio 0.25 and mutation probability 0.1, 250 out of 750 children each mutate with probability 0.1. If a child is chosen to mutate, its kernel size, expansion ratio, and depth are randomly re-sampled out of all the possible values for exploration. After crossover and mutation, we have a new population consisting of the parents, bred children, and mutated children. Next, the fittest individuals are selected as new parents for the next iteration. The above crossover and mutation steps are repeated until the maximum number of evolutionary search iterations is reached. A minimal sketch of one such generation is given below.
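The sketch assumes individuals are dictionaries with the three vectors described above, and that the accuracy and latency predictors are supplied as callables; the mutation step is simplified to re-sampling one randomly chosen block and one stage depth, which is a simplification of the full re-sampling described above.

```python
import random

def fitness(arch, acc_predictor, lat_predictor, t):
    # Eqn. (4): with t in [0, 1], lower values favor higher predicted accuracy
    # and lower predicted latency, so this sketch treats lower values as fitter.
    return (t - 1) * acc_predictor(arch) + t * lat_predictor(arch)

def next_generation(population, acc_predictor, lat_predictor, t,
                    parent_ratio=0.25, mutation_ratio=0.25, mutation_prob=0.1):
    """One iteration of selection, crossover, and mutation (sketch)."""
    population = sorted(population,
                        key=lambda a: fitness(a, acc_predictor, lat_predictor, t))
    parents = population[:int(parent_ratio * len(population))]  # fittest individuals survive

    # Crossover: each element of the child's vectors comes from one of two random parents.
    children = []
    while len(parents) + len(children) < len(population):
        p1, p2 = random.sample(parents, 2)
        children.append({key: [random.choice(pair) for pair in zip(p1[key], p2[key])]
                         for key in ("kernel_size", "expansion_ratio", "depth")})

    # Mutation: a fraction of the children may mutate; here a mutated child
    # re-samples one randomly chosen block and one stage depth.
    for child in random.sample(children, int(mutation_ratio * len(children))):
        if random.random() < mutation_prob:
            block = random.randrange(len(child["kernel_size"]))
            child["kernel_size"][block] = random.choice([3, 5, 7])
            child["expansion_ratio"][block] = random.choice([3, 4, 6])
            stage = random.randrange(len(child["depth"]))
            child["depth"][stage] = random.choice([2, 3, 4])

    return parents + children
```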
A.2 Evolutionary Search Hyperparameters

Typically, the evolutionary search is not very sensitive to different hyperparameter settings, provided that the population size and iteration number are large enough and that there is adequate exploration. In Section 6, our hyperparameter settings are: population size 1000, parent ratio 0.25, mutation probability 0.1, mutation ratio 0.25, and 50 generations given each latency constraint. We denote these settings as "EA#1". In Fig. 17, we change the hyperparameters to "EA#2" (population size 500, parent ratio 0.3, mutation probability 0.2, and mutation ratio 0.4) and run evolutionary search again on the Samsung Galaxy S5e. The results in Fig. 17 show that the searched Pareto-optimal models are almost identical to the original ones ("EA#1").

Fig. 17. Pareto-optimal models searched on Samsung Galaxy S5e with different parameter settings for evolutionary search. "EA#1" denotes the setting with population size 1000, parent ratio 0.25, mutation probability 0.1, and mutation ratio 0.25; "EA#2" uses population size 500, parent ratio 0.3, mutation probability 0.2, and mutation ratio 0.4.

B ADDITIONAL RESULTS

In this section, we present additional experimental results, including the demonstration of latency monotonicity based on third-party latency results and the effectiveness of our transfer learning technique on various mainstream search spaces.

B.1 Latency Monotonicity

To corroborate our own measurements and findings in Section 4, we examine latency monotonicity by leveraging third-party latency predictors and measurements for other devices.

B.1.1 Results on Predicted Latencies. We obtain latency lookup tables for four mobile devices [7]: Google Pixel1, Pixel2, Samsung Galaxy S7 edge, and Note8, in the MobileNet-V2 space with stage widths "32, 16, 24, 48, 80, 104, 192, 320, 1280". In addition, we obtain from [8] latency lookup tables for four cross-platform devices (used in [10]): Google Pixel1, Pixel2, TITAN Xp, and E5-2640 v4, in the MobileNet-V2 space with different stage widths "32, 16, 24, 40, 80, 96, 192, 320, 1280" and measured with the MKL-DNN library. Note that latency predictors are very accurate (e.g., with a root mean squared error of less than 1% of the average latency) [5,10,15,47].

Fig. 18. Latency monotonicity on third-party latency predictors [7,8]. (a)(b) and (c)(d) use different search spaces and DNN acceleration libraries.

We randomly sample 10k models in each search space with variable depths of "2, 3, 4" in each stage, variable kernel sizes of "3, 5, 7" in each convolutional layer, and variable expansion ratios of "3, 4, 6" in each block. We show the results in Fig. 18, which are in line with our experiments: latency monotonicity among mobile devices is strong (SRCC > 0.95), and the FLOPs-latency ranking correlation for mobile devices is also quite strong, while cross-platform latency monotonicity degrades.
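The cross-device SRCC values reported here and throughout the appendix can be computed directly from paired latency lists; a minimal sketch using SciPy (the helper name and the dummy numbers are ours) is shown below.

```python
import numpy as np
from scipy.stats import spearmanr

def latency_srcc(latencies_device_a, latencies_device_b):
    """Spearman's rank correlation between two devices' latencies,
    measured (or predicted) on the same set of sampled architectures."""
    rho, _ = spearmanr(np.asarray(latencies_device_a), np.asarray(latencies_device_b))
    return rho

# Dummy example: a perfectly monotone relationship between the two devices gives SRCC = 1.0.
print(latency_srcc([10.2, 13.5, 18.1, 25.0], [5.1, 6.0, 7.3, 9.9]))
```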
B.1.2 Results on Measured Latencies. We provide more evidence of latency monotonicity across different devices, and even across different DNN frameworks, using the nn-Meter results [49,50]. Specifically, Fig. 19 shows the measured latencies and cross-device SRCCs in six different search spaces. We see that cross-device latency monotonicity strongly exists.

Fig. 19. Latency results of 2000 models on the Cortex A76 CPU, Adreno 640 GPU, Adreno 630 GPU, and Myriad VPU, available in the dataset [49]. Search spaces: (a)(d) GoogLeNet, (b)(e) MnasNet, (c)(f) MobileNet-V2, (g)(j) ResNet, (h)(k) SqueezeNet, and (i)(l) VGG.

B.2 Results on MobileNet-V2

Search Space. Our backbone is MobileNet-V2 with multiplier 1.3, with the channel number in each block fixed. As shown in Fig. 20, the search space consists of the depth of each stage, the kernel size of convolutional layers, and the expansion ratio of each block. The depth can be chosen from "2, 3, 4", the kernel size can be "3, 5, 7", and the candidate expansion ratios are "3, 4, 6". There are five stages whose configurations can be searched, plus the kernel size and expansion ratio of the last inverted residual block.

Fig. 20. MobileNet-V2 search space and architectural encoding.

Proxy Adaptation. We use S5e as the default proxy device. Fig. 21 shows the original SRCC between S5e and desktop CPUs and GPUs, which are all below 0.8. We observe from Section 5 that an SRCC below 0.8 is not enough to find Pareto-optimal architectures on the target device. Thus, in the absence of strong latency monotonicity between the default proxy device and the target device, proxy adaptation is necessary.

In the MobileNet-V2 search space, we have 21 searchable blocks in total, whose configurations can each be chosen out of nine kernel size and expansion ratio combinations or none (i.e., the block is not selected, reducing the stage depth). Thus, to represent an architecture, we simply use a 9-dimensional one-hot vector $x_b$ to encode the specification of each block. Given the proxy device's latency predictor $L_{d_0}(x) = w^T x$ built a priori, we collect the latencies of 80 sampled architectures on the target device, which are further split into 60 for training and 20 for validation. For i7-4790 and i7-4770HQ, we only need the latencies of 30 sampled architectures for training. Next, by solving Eqn. (3), we obtain the AdaProxy device's latency predictor adapted to the target device, resulting in a significantly increased SRCC (a hedged sketch of this adaptation step is given at the end of this discussion). Therefore, with the new latency predictor, we can quickly obtain Pareto-optimal architectures for the AdaProxy device, which are also very close to optimal for the target device (after the removal of non-Pareto-optimal architectures as specified in Section 5).

Results. Even without proxy adaptation, our results in Section 4 show that latency monotonicity among mobile devices (and between S5e and FPGA) is very strong. Here, we show the latency monotonicity between our proxy and desktop GPUs/CPUs, both with and without proxy adaptation. We see from Fig. 21 that the weak monotonicity can be significantly increased by proxy adaptation. Thus, for a new target device that has a low SRCC with our default proxy device, we can simply use the AdaProxy device's latency predictor instead of profiling thousands of architectures and building a new one.
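For illustration only, the sketch below shows one plausible regularized least-squares instantiation of the adaptation step under the linear predictor $L_{d_0}(x) = w^T x$; it is a sketch consistent with the description above, not the exact objective of Eqn. (3), and the helper name, objective form, and data shapes are our assumptions.

```python
import numpy as np

def adapt_proxy_predictor(w_proxy, X_target, y_target, lam=1.0):
    """Adapt the proxy's linear latency predictor to a target device (sketch).

    Solves  min_w ||X_target w - y_target||^2 + lam * ||w - w_proxy||^2,
    i.e., a least-squares fit on the few target-device measurements that is
    regularized toward the proxy's weights (lam is tuned on a validation split).

    w_proxy:  (d,)   weights of the proxy predictor L_{d0}(x) = w^T x
    X_target: (n, d) one-hot encodings of the sampled architectures
    y_target: (n,)   latencies measured on the target device
    """
    d = w_proxy.shape[0]
    A = X_target.T @ X_target + lam * np.eye(d)
    b = X_target.T @ y_target + lam * w_proxy
    return np.linalg.solve(A, b)
```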
Fig. 21. SRCC for various devices (S5e, i7-4790, i7-4770HQ, E5-2673 v3, Tesla T4, Tesla K80, Quadro M4000, Quadro P5000) in the MobileNet-V2 space. S5e is the default proxy device. SRCC values boosted by AdaProxy are highlighted.

Fig. 22. Exhaustive search results based on 10k random architectures and predicted accuracies for non-mobile target devices ((a) Intel i7-4790, (b) E5-2673 v3, (c) Tesla K80, (d) Quadro P5000) with the default S5e proxy and AdaProxy. SRCC values before and after proxy adaptation are shown in Fig. 21.

In Section 6, we already show the architecture performance for mobile target devices and some GPU/CPU devices. Now, we show the architecture performance for the remaining GPU/CPU devices in Fig. 22. We see that, due to the low SRCC, the architectures found by using the default proxy device's latency predictor may not overlap well with the oracle's Pareto-optimal boundary. On the other hand, with improved SRCC, the architectures found by using the AdaProxy device's latency predictor preserve Pareto optimality very well on the target devices. Again, this shows that our proposed transfer learning approach to boost the latency monotonicity is necessary and effective. For the devices in Fig. 22(d), we use 80 sampled architectures (50 for training, and 30 for validation and tuning $\lambda$) to construct AdaProxy. Note that the results are based on exhaustive search over 10k randomly selected architectures, using the predicted accuracies as the true values. This essentially considers a semi-oracle NAS process (on a small space of 10k architectures) assuming a perfect accuracy predictor. In other words, compared to evolutionary search (whose accuracy predictor itself is also not perfect), it has a more stringent requirement on the SRCC between the target device and the proxy (or AdaProxy) device. Thus, our approach works well even in this challenging case.

B.3 Results on NAS-Bench-201

Search Space. NAS-Bench-201 adopts a fixed cell search space [17]. Each searched cell is represented as a densely-connected directed acyclic graph (DAG), which is then stacked together with a pre-defined skeleton to construct an architecture. Specifically, as shown in Fig. 23, the search space considers four nodes and five representative operation candidates for the operation set, and varies the feature map sizes and the dimensions of the final fully-connected layer to handle different datasets (i.e., CIFAR-10, CIFAR-100, and ImageNet16-120).

Fig. 23. NAS-Bench-201 search space and architectural encoding.

Proxy Adaptation. We have four searchable nodes in total, the operation for each of which can be chosen from five candidates. Thus, we can use a 5-dimensional one-hot vector to encode the specification of each node, although more advanced representations (e.g., graph-based [19]) are also applicable. Pixel3 is the default proxy device.
Given the proxy device's latency predictor $L_{d_0}(x) = w^T x$ built a priori, the training in transfer learning is based on the measured latencies of 40 sampled architectures for the edge TPU and edge GPU, and 20 sampled architectures for Eyeriss and FPGA, respectively. In addition, validation uses another 20 sampled architectures for tuning the hyperparameter. Next, by solving Eqn. (3), we obtain the AdaProxy device's latency predictor adapted to the target device, resulting in a significantly increased SRCC. We show in Fig. 24 the latency monotonicity in terms of SRCC values, both with and without proxy adaptation. We see that the weak monotonicity can be significantly increased by proxy adaptation.

Results. Considering the CIFAR-100 dataset, Fig. 25 shows the optimal architectures found by using the proxy device's latency predictor, the adapted latency predictor, and the oracle, respectively. We can see that, due to the low pre-adaptation SRCC values between the proxy device Pixel3 and the target devices, only a few architectures that are optimal for the proxy are still optimal for the target devices after architecture removal. Moreover, even the proxy's remaining optimal architectures can be far from optimal on the target device. For example, Fig. 25(a) shows that some of Pixel3's optimal architectures deviate from the Pareto-optimal boundary on the edge GPU. By using proxy adaptation and increasing the SRCC values, the AdaProxy's optimal architectures can be efficiently transferred to target devices while preserving optimality. The proxy device Pixel3 has a high SRCC of 0.96 with Raspi4, even without proxy adaptation. Thus, as shown in Fig. 25(e), the optimality of Pixel3's architectures is preserved very well on Raspi4. All these results demonstrate the importance of strong monotonicity between the proxy and the target device, as well as the effectiveness of our proxy adaptation technique, for hardware-aware NAS with a total latency evaluation cost of $O(1)$. The same observation is also made in Fig. 26 for the ImageNet16-120 dataset.

Fig. 24. SRCC for various devices (Edge GPU, Edge TPU, Raspi4, Pixel3, Eyeriss, FPGA) in the NAS-Bench-201 search space on the CIFAR-100 (left) and ImageNet16-120 (right) datasets. Pixel3 is our proxy device. SRCC values boosted with AdaProxy are highlighted.

Fig. 25. Exhaustive search results for different target devices ((a) Edge GPU, (b) Edge TPU, (c) Eyeriss, (d) FPGA, (e) Raspberry Pi 4) on NAS-Bench-201 architectures (CIFAR-100 dataset) [17,29]. Pixel3 is the proxy. SRCC values before and after proxy adaptation are shown in the left subfigure of Fig. 24.

Fig. 26. Exhaustive search results for different target devices ((a) Edge GPU, (b) Edge TPU, (c) Eyeriss, (d) FPGA, (e) Raspberry Pi 4) on NAS-Bench-201 architectures (ImageNet16-120 dataset) [17,29]. Pixel3 is the proxy. SRCC values before and after proxy adaptation are shown in the right subfigure of Fig. 24.
B.4 Results on FBNet

Search Space. Similar to MobileNet-V2, the FBNet search space is also layer-wise with a fixed macro-architecture, which defines the number of layers and the input/output dimensions of each layer, and fixes the first and last three layers, with the remaining layers to be searched. As shown in Fig. 27, the overall search space consists of 22 searchable blocks: the first and last inverted residual blocks, and five stages with at most four blocks each. For each block, the kernel size can be chosen from "3, 5", and the expansion ratio can be "1, 3, 6". For the first and last 1x1 convolution layers, group convolution can be used to reduce the computation complexity. Also, each block can be skipped. Thus, there are nine candidate specification choices for each block (detailed configurations are shown in Table 2 of [45]).

Fig. 27. FBNet search space and architectural encoding.

Proxy Adaptation. We have 22 searchable blocks in total, the configuration for each of which can be chosen from the nine architecture candidates (including "Skip"). Then, we can still use a 9-dimensional one-hot vector to encode each block. Using Pixel3 as the default proxy and the same approach as in Appendix B.2, we can solve Eqn. (3) to create an AdaProxy device, which has an SRCC of close to 0.9 or higher with the target device. In the transfer learning process, the numbers of sampled architectures for training are: 80 (Edge GPU), 40 (Raspi4), 30 (FPGA), and 20 (Eyeriss). In addition, validation uses another 20 sampled architectures for tuning the hyperparameter.

Fig. 28. SRCC for various devices (Edge GPU, Raspi4, Pixel3, Eyeriss, FPGA) in the FBNet search spaces [29], on the CIFAR-100 (left) and ImageNet16-120 (right) datasets, respectively. Pixel3 is the proxy. SRCC values boosted by AdaProxy are highlighted.

Results. Our key focus is to achieve a high SRCC between the proxy (or AdaProxy) device and the target device, such that we can efficiently transfer the optimal architectures found on the proxy (or AdaProxy) device to the new target device without measuring the latencies of thousands of architectures and building a new latency predictor. Since the accuracy results for architectures in the FBNet search space are not available [29], we only show in Fig. 28 the SRCC values, both with and without proxy adaptation. We can see that cross-platform SRCCs are greatly boosted (i.e., close to 1) with AdaProxy. By Theorem 3.1, the strong latency monotonicity ensures that the optimal architectures found on the proxy (or AdaProxy) device can be applied to new target devices.
B.5 Results on nn-Meter

The nn-Meter dataset released in [49,50] includes the measured inference latencies of 2000 models from 11 search spaces, including GoogLeNet, MnasNet, ProxylessNAS, etc., on three mobile devices and one edge device: Pixel4 (Cortex A76 CPU), Mi9 (Adreno 640 GPU), Pixel3XL (Adreno 630 GPU), and Myriad VPU (Intel Movidius NCS2 edge device). Fig. 19 shows that the devices already have strong latency monotonicity, with SRCC values greater than 0.9, on six search spaces. Among the remaining five search spaces, MobileNet-V1 and AlexNet are obsolete and phased out for hardware-aware NAS. Next, we apply our proxy adaptation technique on the other three search spaces: MobileNet-V3, NAS-Bench-201, and ProxylessNAS, which are mainstream and widely-used backbones in SOTA NAS algorithms.

Fig. 29. SRCC for various devices (Myriad, Adreno 630, Adreno 640, Cortex A76) in the MobileNet-V3 search space [49,50]. SRCC values boosted by AdaProxy are highlighted.

B.5.1 MobileNet-V3. In our experiment, the number of searchable blocks in the MobileNet-V3 space is fixed at 12. For each block, the input, mid, and output channel numbers, and the kernel size, are variable from a set of candidates. Instead of directly using the kernel-level latency predictor in [50], which has a very large dimensionality for one-hot architectural encoding, we use a simple block-level encoding method. Concretely, for each block, we use one-hot encodings for the input, mid, and output channel numbers and the kernel size, respectively, and then concatenate these four one-hot vectors to form the block-level encoding. After further concatenating the encoding vector of each block, we have a 530-dimensional encoding for each architecture. Then, we build a simple 4-layer fully-connected neural network (with 500/250/100 neurons in the hidden layers) and train it on the latency data of the edge device Myriad VPU (used as the proxy), which has a low SRCC with the other three mobile devices. For the neural network training, we split the 1000 data samples (we use 1000 out of the 2000 models for this experiment) into 800 for training and 200 for testing, set the learning rate to 0.01 and the batch size to 128, and train the network for 500 epochs. (A sketch of this predictor network is given at the end of this subsection.) We also compress the network to two layers, by fixing the first layer and appending another layer, to serve as the proxy device's latency predictor. Next, we apply the transfer learning method of Section 5.4 to the three target mobile devices. We use the latencies of 150 architectures for transfer learning on Adreno 630/640 and 160 architectures for Cortex A76, respectively, while using 20 architectures for validation. The relatively larger number of latency measurements needed for boosting the latency monotonicity is due in great part to two reasons: (1) MobileNet-V3 is a fairly complex search space, with many searchable operators; and (2) we intentionally address a challenging case where the proxy device has weak monotonicity with all the target devices. The results are shown in Fig. 29, where we can see that the SRCC values are significantly increased after proxy adaptation, despite the initially weak latency monotonicity.
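For concreteness, the sketch below instantiates the 4-layer fully-connected predictor described above (530-dimensional block-level encoding, 500/250/100 hidden units, learning rate 0.01, batch size 128, 500 epochs). The choice of PyTorch and of the SGD optimizer are assumptions made purely for illustration and are not prescribed by our experiments.

```python
import torch
import torch.nn as nn

INPUT_DIM = 530  # concatenated block-level one-hot encoding (MobileNet-V3 space)

# Four fully-connected layers with 500/250/100 hidden neurons and one latency output.
latency_predictor = nn.Sequential(
    nn.Linear(INPUT_DIM, 500), nn.ReLU(),
    nn.Linear(500, 250), nn.ReLU(),
    nn.Linear(250, 100), nn.ReLU(),
    nn.Linear(100, 1),
)

def train_latency_predictor(model, X, y, epochs=500, lr=0.01, batch_size=128):
    """MSE training loop matching the settings reported above (sketch)."""
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(X, y), batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)  # optimizer choice is an assumption
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(xb).squeeze(-1), yb)
            loss.backward()
            optimizer.step()
    return model
```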
B.5.2 ProxylessNAS. This search space is based on the MobileNet-V2 backbone, with variable expansion ratios, kernel sizes, and input and output channel numbers [10]. We apply a similar encoding approach as in the MobileNet-V3 space, and obtain a 783-dimensional vector for each architecture in the nn-Meter dataset [49]. The Myriad VPU and the Adreno 640 GPU are the only pair of devices with an SRCC less than 0.9, with the pre-adaptation SRCC already being 0.87. We directly use the 783-dimensional vector to perform transfer learning by updating the weights pre-trained on the proxy device (Adreno 640 GPU), with the latencies of 30 sampled architectures for training and 20 architectures for validation. The results are shown in Fig. 30, demonstrating that the SRCC can be increased to over 0.9 after proxy adaptation.

Fig. 30. SRCC for various devices (Myriad, Adreno 630, Adreno 640, Cortex A76) in the ProxylessNAS search space [49,50]. SRCC values boosted by AdaProxy are highlighted. We only apply proxy adaptation for the Myriad VPU edge device, since the other target devices already have a high SRCC of 0.9+ with the proxy device.

B.5.3 NAS-Bench-201. For the NAS-Bench-201 space, we adopt the same encoding method as described in Appendix B.3. We also consolidate the latency datasets released by three different research studies [19,29,50] for the NAS-Bench-201 search space. The Myriad VPU edge device is the default proxy, while the target devices include FPGAs, GPUs, CPUs, mobile devices, edge devices, DSPs, and TPUs. Using the latencies of 20 sampled architectures for validation, the numbers of sampled architectures for training in the transfer learning process are: 30 for Edge GPU, Edge TPU, Eyeriss, FPGA, Raspi4, Adreno 630, Adreno 640, Cortex A76, CPU 855, and GPU 855; 50 for DSP 855; 55 for Pixel3 and Jetson; 60 for GTX and i7; and 90 for Jetson 16. Note that the dataset in [49] only contains latencies for 2000 architectures in the NAS-Bench-201 space, and hence we only consider these 2000 architectures when calculating the cross-device SRCC values. We show the results in Fig. 31. While the latencies are measured by different research groups, on very different devices, and using different deep learning frameworks, our proxy adaptation technique can still successfully increase the SRCC values to 0.9+, significantly boosting the otherwise weak latency monotonicity and keeping the total latency evaluation cost at $O(1)$ for hardware-aware NAS.
Fig. 31. SRCC for various devices (Edge GPU, Edge TPU, Eyeriss, FPGA, Pixel3, Raspi4, Adreno 630, Adreno 640, Cortex A76, Myriad, GTX 1080 Ti, Kryo 485, Hexagon 690, Adreno 640*, i7-7820x, Jetson Nano, Jetson Nano 16) in the NAS-Bench-201 search space, with latencies collected from [19,29,49,50]. SRCC values boosted by AdaProxy are highlighted. "Adreno640" and "Adreno640*" denote model latencies measured by [50] and [19], respectively. "Jetson Nano" and "Jetson Nano 16" represent the latencies of FP32 and FP16 models, respectively.