The MIT-AVT Study — Software: Data Pipeline and Deep Learning Model Training
To gain a deeper understanding of how humans and autonomous vehicles interact in a rapidly changing transportation system, MIT conducted the Autonomous Vehicle Technology study (MIT-AVT), which aims to:
(1) collect large-scale real-world driving data, including high-definition video, to drive deep-learning-based internal and external perception systems;
(2) gain a comprehensive understanding of how people interact with vehicle automation technology by integrating video data with vehicle state data, driver characteristics, mental models, and self-reported technology experience; and
(3) identify how technology and other factors related to automation use can be improved in ways that save lives.
To support this research, advanced techniques in embedded systems programming, software engineering, data processing, distributed computing, computer vision, and deep learning are applied to the collection and analysis of large-scale naturalistic driving data.
This work presents the methodology behind the MIT-AVT study and aims to define and inspire the next generation of naturalistic driving studies. The study's design principle is to build on previously successful naturalistic driving study (NDS) methods while harnessing computer vision and deep learning to automatically extract patterns of how humans interact with varying levels of vehicle automation: (1) using AI to analyze the overall driving experience in large-scale data, and (2) using human expertise and qualitative analysis to dig into the data for case-specific understanding. To date, the dataset includes 78 participants, 7,146 days of participation, 275,589 miles, and 3.5 billion video frames. Statistics on the size and scope of the MIT-AVT dataset are regularly updated at hcai.mit.edu/avt.
The previous article briefly introduced MIT-AVT and the hardware side of the study; this article focuses on the software side.
Software: Data Pipeline and Deep Learning Model Training
Built on RIDER's robust, reliable, and flexible hardware architecture, the software framework is equally extensive: it records and processes raw sensor data and, through many steps across thousands of GPU-enabled compute cores, extracts knowledge and insight about human behavior in the context of autonomous vehicle technology. Figure 8 shows the journey from raw timestamped sensor data to actionable knowledge. The high-level steps are (1) data cleaning and synchronization, (2) automated or semi-automated data annotation, context interpretation, and knowledge extraction, and (3) aggregate analysis and visualization.

Figure 8: The MIT-AVT data pipeline, showing the process of offloading, cleaning, synchronizing, and extracting knowledge from the data. On the left is the dependency-constrained asynchronous distributed computing framework. In the middle is the high-level sequence of processes that perform multiple levels of knowledge extraction. On the right are the broad categories of data produced by the pipeline, organized by size.
This section discusses the data pipeline (Figure 8), including the software implemented on the RIDER box that enables data streaming and recording, as well as the software used to offload and process the data on central servers. The operational requirements of the software running on the RIDER box are:
1) Power on whenever the vehicle is on
2) Create a trip directory on the external solid-state drive
3) Redirect all data streams into timestamped trip files
4) Record metadata in real time and transmit it to the lab
5) Power down after the vehicle is turned off
A. Microcontroller
The microcontroller on the Knights of CANelot board runs a small C program responsible for powering the RIDER system in sync with the vehicle.
By default, the microcontroller sleeps, waiting for a specific CAN message. By listening to the vehicle's CAN bus, the program can recognize when the CAN message carrying a particular signal begins, which means the car has been turned on. When this signal is observed, the program connects the vehicle's power to the rest of the system and data collection begins. When the designated message stops, meaning the car has been turned off, the microcontroller signals the Banana Pi to close all files and shut down cleanly. It then waits 60 seconds, finally disconnects power from the rest of the system, and returns to its original sleep state.
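The power-control behavior described above is a small state machine. The following is a minimal Python sketch of that logic (the actual firmware is written in C; the CAN ID and the message-timeout threshold here are hypothetical placeholders, while the 60-second grace period comes from the text):

```python
# Hypothetical simulation of the Knights of CANelot power-control logic.
# IGNITION_CAN_ID and the 2-second message timeout are illustrative;
# the 60-second shutdown grace period is described in the text.
IGNITION_CAN_ID = 0x1F0
SHUTDOWN_GRACE_S = 60

class PowerController:
    def __init__(self):
        self.state = "SLEEP"        # SLEEP -> POWERED -> DRAINING -> SLEEP
        self.last_ignition = None
        self.drain_until = None

    def on_can_frame(self, can_id, now):
        """Called for every CAN frame observed on the bus."""
        if can_id != IGNITION_CAN_ID:
            return
        self.last_ignition = now
        if self.state in ("SLEEP", "DRAINING"):
            self.state = "POWERED"  # connect vehicle power to RIDER

    def tick(self, now, message_timeout_s=2.0):
        """Periodic check: has the ignition message stopped?"""
        if self.state == "POWERED" and self.last_ignition is not None \
                and now - self.last_ignition > message_timeout_s:
            self.state = "DRAINING"  # signal the Banana Pi to shut down
            self.drain_until = now + SHUTDOWN_GRACE_S
        elif self.state == "DRAINING" and now >= self.drain_until:
            self.state = "SLEEP"     # cut power, return to sleep
```

The state machine powers up as soon as the ignition message appears, and only cuts power 60 seconds after it disappears, giving the Pi time to close files cleanly.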
B. Single-Board Computer
Our single-board computer, a Banana Pi, contains a 32 GB SD card holding the RIDER file system, software, and configuration files. The Banana Pi runs a modified Linux kernel with custom kernel modules on a modified Bananian operating system, enhanced for performance and security. Performance is improved by disabling unnecessary kernel modules and removing extraneous Linux services. Security enhancements include disabling all CAN transmission, which prohibits malicious or unintentional transmission of actuation messages to the vehicle's systems. Other security improvements include changing network settings to prevent any remote connection from logging in. Specific MIT machines are whitelisted to allow configuration changes over a physical connection, and the default system services are modified so that a series of locally installed programs managing data collection run at boot.
C. Startup Scripts
Whenever the system boots, the Banana Pi runs a series of bash startup scripts that initialize data logging. First, the Pi's onboard clock is synchronized with a real-time clock that keeps high-resolution timing information. Modules for device communication, such as UART, I2C, SPI, UVC, and CAN, are then loaded to allow interaction with the incoming data streams. A monitoring script is started that shuts the system down if the designated signal is received from the Knights of CANelot microcontroller, and a separate GSM monitoring script helps reconnect to the cellular network after a loss of connectivity. The final initialization step is launching the Python scripts Dacman and Lighthouse.
D. Dacman
Dacman is the central data-handling script that manages all data streams. It uses a configuration file named trip_dacman.json, which contains the unique device IDs of the cameras as well as the unique RIDER ID associated with the RIDER box it is stored in. The configuration file also contains the unique IDs of the subject, vehicle, and study associated with that driver. Once started, Dacman creates a trip directory on the external solid-state drive, named by date using a unique naming convention: rider-id_date_timestamp (e.g., 20_20160726_1469546998634990). The trip directory contains a copy of trip_dacman.json, CSV files for the data of each included subsystem, and a specification file named trip_specs.json holding microsecond timestamps that mark the start and end of each subsystem and of the trip itself.
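The naming convention above can be sketched in a few lines of Python. This is an illustrative reconstruction, not Dacman's actual code; the function name is hypothetical, while the rider-id_date_timestamp pattern follows the example given (20_20160726_1469546998634990, with the date as YYYYMMDD and the timestamp in microseconds):

```python
import os
import time

def make_trip_dir(root, rider_id):
    """Sketch: create a trip directory named rider-id_date_timestamp,
    e.g. 20_20160726_1469546998634990 (date as YYYYMMDD, timestamp
    in microseconds since the epoch)."""
    now = time.time()
    date = time.strftime("%Y%m%d", time.localtime(now))
    micros = int(now * 1_000_000)
    path = os.path.join(root, f"{rider_id}_{date}_{micros}")
    os.makedirs(path, exist_ok=True)
    return path
```

Because the microsecond timestamp is part of the name, two trips started by the same RIDER box on the same day still get distinct directories.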
Dacman invokes a manager Python script for each subsystem (e.g., audio_manager.py or can_manager.py), which makes the relevant system calls to record data. Throughout the current trip, all data are written to CSV files in which each row carries timestamp information. Dacman calls two further programs written in C to help produce these files: cam2hd for managing the cameras and dump_can for creating the CAN files. Audio and camera data are recorded in RAW and H.264 formats respectively, with accompanying CSVs recording the microsecond timestamp of each frame. If any errors are encountered while Dacman is running, the system restarts up to twice to try to resolve them, and shuts down if it cannot.
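A per-subsystem log of the kind described, where every row begins with a microsecond timestamp, might look like the following minimal sketch (the class and column names are hypothetical, not taken from the actual manager scripts):

```python
import csv
import time

class TimestampedCSVLogger:
    """Sketch of a per-subsystem CSV log in which every row starts
    with a microsecond timestamp (names are illustrative)."""

    def __init__(self, path, fields):
        self._file = open(path, "w", newline="")
        self._writer = csv.writer(self._file)
        # Header row: timestamp column followed by subsystem fields.
        self._writer.writerow(["timestamp_us"] + list(fields))

    def log(self, *values):
        ts_us = int(time.time() * 1_000_000)
        self._writer.writerow([ts_us] + list(values))

    def close(self):
        self._file.close()
```

For example, an IMU manager could create `TimestampedCSVLogger("imu.csv", ["ax", "ay", "az"])` and call `log(...)` once per sample; downstream synchronization then keys on the first column.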
E. Cam2HD
Cam2HD is a program written in C that opens and records all camera data. It relies on V4L (Video4Linux), an open-source project comprising the set of camera drivers in Linux. Through V4L, it accesses the cameras attached to RIDER, sets the incoming image resolution to 720p, and writes out raw H.264 frames.
F. DumpCAN
Dump_CAN is another program written in C, used to configure and receive data from the Allwinner A20's CAN controller. It uses the can4linux module to produce a CSV containing all CAN data received from the connected CAN bus. It also provides low-level control of the CAN controller, which allows Dump_CAN to put the controller into listen-only mode, improving safety: by eliminating the need to send acknowledgments while listening to messages on the CAN network, any possible interference with the existing systems on the CAN bus is minimized.
G. Lighthouse
Lighthouse is a Python script that sends information about each trip to Homebase. The information sent includes trip timing information, GPS data, power consumption, temperature, and available external drive space. The communication interval is specified in the Dacman configuration file. All communication is sent in JSON format and, chosen for its speed, is encrypted with Curve25519-based public-key cryptography: each RIDER uses the server's public key, together with its own unique public/private key pair, to encrypt and transmit data. Lighthouse is written in Python and depends on libzmq/libnacl.
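The JSON status message itself can be sketched with the standard library alone. The field names below are hypothetical (the text lists only the categories of information sent), and the Curve25519 encryption step performed by libnacl is deliberately omitted from this sketch:

```python
import json
import time

def build_lighthouse_payload(rider_id, trip_id, gps, power_w, temp_c, free_gb):
    """Sketch of the kind of JSON status message Lighthouse might send
    to Homebase. Field names are hypothetical; in the real system the
    serialized payload is encrypted with Curve25519 before transmission."""
    payload = {
        "rider_id": rider_id,
        "trip_id": trip_id,
        "timestamp_us": int(time.time() * 1_000_000),
        "gps": {"lat": gps[0], "lon": gps[1]},
        "power_w": power_w,            # power consumption
        "temp_c": temp_c,              # enclosure temperature
        "free_drive_gb": free_gb,      # remaining external drive space
    }
    return json.dumps(payload)
```

Keeping the payload as plain JSON before encryption lets Homebase decrypt and store it directly in the RIDER database without a custom wire format.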
H. Homebase
Homebase is the counterpart script that receives, decrypts, and records all information sent by Lighthouse, storing it in the RIDER database; this allows drive space and system health to be monitored remotely. All additional key management is done here so that messages from each unique box can be decrypted.
I. Heartbeat
Heartbeat is an engineer-facing interface that displays RIDER system status information to verify successful operation or understand potential system failures. Heartbeat tracks the various RIDER logs using the information Homebase commits to the database. This is useful for analyzing the current state of a vehicle and helps determine which instrumented vehicles need a drive swap (due to low hard-drive space) or system repair; it also helps verify that any repair was successful.
J. RIDER Database
A PostgreSQL database stores all incoming trip information, as well as information about every trip offloaded to the storage server. After additional processing, useful information about each trip can be added to the database, and queries can then be structured to find specific trips, or times within trips, at which particular events or conditions occurred. The following tables are fundamental to the trip-processing pipeline:
instrumentation: the date and vehicle ID for each RIDER box installation
participation: unique subject and study IDs, combined to identify primary and secondary drivers
riders: RIDER IDs paired with notes and IP addresses
vehicles: vehicle information paired with a vehicle ID, such as make and model, manufacture date, color, and the availability of specific technologies
trips: a unique ID for each trip offloaded to central storage, together with the study, vehicle, subject, and RIDER IDs. Also records synchronization status, available camera types, and subsystem data, plus metadata about the content of the trip itself, such as the presence of sun, GPS frequency, and the presence of certain technology use or acceleration events.
epochs_<epoch-label>: one table per epoch type, labeled and used to identify the trips and video frame ranges in which that epoch occurs (e.g., Autopilot use in a Tesla is recorded in epochs_autopilot)
homebase_logs: streaming log information from the Homebase script, used to track the health and status of RIDER systems
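To illustrate how the epoch tables support queries of the kind described, here is a small sketch using SQLite in place of PostgreSQL; the table and column names are simplified guesses, not the study's actual schema:

```python
import sqlite3

# Sketch (SQLite stands in for PostgreSQL) of joining an epochs table
# to trips to find where a technology was in use. Schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trips (
    trip_id INTEGER PRIMARY KEY,
    rider_id INTEGER,
    vehicle_id INTEGER
);
CREATE TABLE epochs_autopilot (
    trip_id INTEGER,
    frame_start INTEGER,
    frame_end INTEGER
);
""")
conn.execute("INSERT INTO trips VALUES (1, 20, 7)")
conn.execute("INSERT INTO trips VALUES (2, 20, 7)")
conn.execute("INSERT INTO epochs_autopilot VALUES (1, 1500, 9000)")

# All trips, with frame ranges, during which Autopilot was engaged.
rows = conn.execute("""
    SELECT t.trip_id, e.frame_start, e.frame_end
    FROM trips t JOIN epochs_autopilot e ON t.trip_id = e.trip_id
""").fetchall()
print(rows)   # [(1, 1500, 9000)]
```

Because epochs reference trip IDs and frame ranges, a single join is enough to pull the exact video segments in which a given condition occurred.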
K. Cleaning
After the raw trip data are offloaded to the storage server, every trip must be checked for inconsistencies. Some inconsistencies can be repaired, for example when timestamp information can be recovered from multiple files, or when a non-essential subsystem (e.g., the IMU or audio) failed during the trip. In unrecoverable cases, such as a camera being unplugged during a trip, the trip is removed from the dataset. Trips with valid data files can also be removed if they meet certain filtering constraints, for example when the vehicle is turned on but does not move before being turned off again.
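The filtering constraints above amount to a simple predicate over each trip's metadata. The following sketch is illustrative only; the field names and the trip record shape are hypothetical:

```python
def keep_trip(trip):
    """Sketch of a cleaning filter under the constraints described
    above. `trip` is a hypothetical dict; field names are illustrative."""
    # Unrecoverable: a camera was unplugged mid-trip.
    if trip.get("camera_unplugged"):
        return False
    # Filter out trips where the car turned on but never moved.
    if trip.get("distance_miles", 0.0) == 0.0:
        return False
    return True

trips = [
    {"trip_id": 1, "distance_miles": 12.4, "camera_unplugged": False},
    {"trip_id": 2, "distance_miles": 0.0,  "camera_unplugged": False},
    {"trip_id": 3, "distance_miles": 3.1,  "camera_unplugged": True},
]
valid = [t["trip_id"] for t in trips if keep_trip(t)]
print(valid)  # [1]
```

Trips failing the predicate are dropped before synchronization, so downstream steps only ever see trips with complete, usable data.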
L. Synchronization
After cleaning and filtering, valid trips go through a series of synchronization steps. First, using the latest camera start timestamp and the earliest camera end timestamp, the per-frame timestamps collected from each camera are aligned in a single video CSV file at 30 frames per second. In low-light conditions a camera may drop to recording at 15 frames per second; in those cases some frames are repeated to achieve 30 frames per second in the synchronized video.
在對齊所有原始視頻之后,可以以每秒30幀的速度創(chuàng)建新的同步視頻文件。然后通過創(chuàng)建一個CSV來解碼CAN數(shù)據(jù),其中所有相關(guān)的CAN消息都作為列,同步的幀ID作為行。然后,根據(jù)與每個解碼后的CAN消息最近的時間戳,逐幀插入CAN消息值。然后可以生成一個最終的同步可視化,顯示所有的視頻流,并可以在同一個視頻的單獨面板中提供信息。然后,數(shù)據(jù)就可以由任何運行統(tǒng)計數(shù)據(jù)、檢測任務(wù)或手動注釋任務(wù)的算法進(jìn)行處理。