Past Research Works

Undergraduate Projects

Master's Theses

This study uses laser line-scanning and the triangulation principle to measure object surface height and build a three-dimensional (3D) point cloud model, applied to plant phenotyping. A line laser and two cameras are used: the laser is mounted with its normal perpendicular to the measurement platform, and the cameras are installed on either side of the laser according to the triangulation principle. The correspondence between laser offset and height is established with the measurement platform, and a checkerboard is used to calibrate the scale in different regions of the image. To improve accuracy, a stepper motor with a lead-screw drive positions the platform precisely, and the symmetric camera pair resolves occlusion. Ceramic gauge blocks serve as the targets for accuracy verification; the highest accuracy, an error percentage as low as 0.08%, is obtained when the measurement platform, the cameras' optical axes, and the line laser intersect at the same position. Imaging and verification with the gauge blocks confirm that the method can effectively reconstruct 3D point clouds of plants, with significant application value for botanical and agricultural research.
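To make the offset-to-height step concrete, here is a minimal sketch, assuming a linear mapping between the laser line's pixel offset and surface height calibrated from the gauge blocks. The constants K_MM_PER_PX and U0_BASELINE and all function names are illustrative assumptions, not the thesis implementation.

```python
"""Minimal sketch of laser-line triangulation, assuming a linear
offset-to-height mapping h = k * (u - u0) calibrated from gauge blocks
of known height. Constants and names are illustrative only."""
import numpy as np
import cv2

K_MM_PER_PX = 0.05   # assumed scale factor [mm/px] from checkerboard/gauge blocks
U0_BASELINE = 420.0  # assumed laser-line row on the empty, flat platform

def laser_line_rows(gray: np.ndarray) -> np.ndarray:
    """Sub-pixel row of the brightest (laser) pixel in each column,
    via an intensity-weighted centroid around the per-column peak."""
    peak = gray.argmax(axis=0)
    rows = np.arange(gray.shape[0])[:, None]
    w = gray.astype(np.float64)
    mask = np.abs(rows - peak[None, :]) <= 3  # +/-3-row window
    w = np.where(mask, w, 0.0)
    return (w * rows).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-9)

def profile_heights(frame_bgr: np.ndarray) -> np.ndarray:
    """Convert one camera frame into a height profile along the laser line."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    u = laser_line_rows(gray)
    return K_MM_PER_PX * (u - U0_BASELINE)  # height [mm] per image column
```

Stacking one such profile per stepper increment of the platform would then assemble the 3D point cloud.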

Accurate object detection and precise depth estimation are crucial in the development of autonomous driving systems; many accidents in autonomous driving stem from failing to recognize an object or misestimating its depth. This study addresses these challenges by applying the YOLO (You Only Look Once) deep learning algorithm for object recognition and fusing camera and LiDAR data. A precise camera-LiDAR calibration was achieved with an average reprojection error of 0.32% (9.96 pixels), significantly improving object recognition and depth estimation accuracy. The YOLOv8 model reached a precision of 0.76 and a recall of 0.64 in training. In the confusion matrix of the model's predictions, the true positive rates on the validation data were 1.00 for traffic cones, 0.60 for pedestrians, 0.52 for cars, and 0.60 for motorcycles; on the test data they were 1.00, 0.58, 0.54, and 0.65, respectively. By combining YOLO detections with LiDAR data, the system enhances perception, enabling effective obstacle detection and reliable path planning. The results demonstrate the potential of this approach to improve autonomous vehicle perception, contributing to safer and more efficient autonomous driving. This work advances sensor fusion technology and provides a scalable solution for integrating advanced object detection algorithms with real-world vehicle sensing systems, laying the groundwork for future improvements in sensor integration and image recognition accuracy.
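As a rough illustration of the fusion step, the sketch below projects LiDAR points into the image with an assumed pinhole calibration (K, R, t), grades the calibration by mean reprojection error, and attaches a median LiDAR depth to each detection box. The matrix names and the (x1, y1, x2, y2) box format are assumptions, not the study's code.

```python
"""Illustrative camera-LiDAR fusion sketch under an assumed pinhole
calibration; not the thesis implementation."""
import numpy as np

def project_lidar(points_xyz, K, R, t):
    """Project Nx3 LiDAR points into pixel coordinates; return (uv, depth)."""
    cam = points_xyz @ R.T + t          # LiDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0.1]          # keep points in front of the camera
    uvw = cam @ K.T                     # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv, cam[:, 2]

def mean_reprojection_error(uv_projected, uv_measured):
    """Average pixel distance between projected LiDAR points and their
    measured image correspondences, used to grade the calibration."""
    return float(np.linalg.norm(uv_projected - uv_measured, axis=1).mean())

def box_depth(uv, depth, box):
    """Median LiDAR depth of the points falling inside one detection box
    (x1, y1, x2, y2), e.g. a YOLOv8 detection."""
    x1, y1, x2, y2 = box
    inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
             (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    return float(np.median(depth[inside])) if inside.any() else None
```

A median (rather than mean) depth is one plausible way to keep background points inside the box from skewing the obstacle distance.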

This study uses stereo vision to measure the gait-cycle motion of dogs while walking. Markers placed over the hip bone, femur, tibia, and metatarsus are tracked to measure the relative motion trajectories of the hip, knee, and ankle joints, and the resulting trajectory model supports the development of artificial joint mechanisms. Stereo vision requires camera calibration to obtain correct depth information, but the depths of the markers in space are unknown, so imaging initially relies on calibration data taken at approximately the camera-to-dog distance. Calibration accuracy can be judged from the Y-axis pixel coordinate difference between the marker centroids in the two views, and the error grows in proportion to the distance between the markers' actual depth and the depth of the calibration data.

To improve measurement accuracy and standardize the calibration procedure, a stepper motor with a transmission mechanism automates the capture of calibration-board images at uniform intervals, building calibration data at different depths. Measurement starts from the calibration data at a default depth; once further depth information is obtained, the calibration data matching that depth is adopted, and further iteration yields more accurate depth coordinates and precise gait trajectories, which in the future can help artificial joint mechanisms better match a subject's natural gait. Using targets with known spacing, this experiment compares the data before and after iteration; after the iterative computation, the depth error is reduced to less than 1 mm.
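The iterative refinement can be sketched as follows, under the simplifying assumption that each depth-specific calibration reduces to a single focal-length-times-baseline constant for the standard stereo relation Z = fB/d. The CALIB_TABLE values, the initial depth, and all names are hypothetical, stand-ins for the thesis's full per-depth calibration data.

```python
"""Sketch of iterative depth refinement: triangulate with a default-depth
calibration, then re-triangulate with the calibration recorded nearest to
the current depth estimate until the estimate converges. All values are
hypothetical."""
import numpy as np

# Hypothetical table: capture depth [mm] -> focal*baseline [px*mm],
# built by photographing the calibration board at uniform stepper increments.
CALIB_TABLE = {600.0: 48100.0, 700.0: 48400.0, 800.0: 48650.0, 900.0: 48820.0}

def nearest_calibration(depth_mm: float) -> float:
    """Pick the calibration dataset recorded closest to the estimated depth."""
    depths = np.array(sorted(CALIB_TABLE))
    return float(depths[np.abs(depths - depth_mm).argmin()])

def refine_depth(disparity_px: float, init_depth_mm: float = 700.0,
                 tol_mm: float = 0.1, max_iter: int = 10) -> float:
    """Iteratively re-estimate marker depth Z = f*B / d, re-selecting the
    depth-matched calibration on each round."""
    z = init_depth_mm
    for _ in range(max_iter):
        fb = CALIB_TABLE[nearest_calibration(z)]
        z_new = fb / disparity_px
        if abs(z_new - z) < tol_mm:
            return z_new
        z = z_new
    return z
```

In this simplified form, each pass swaps in the calibration data recorded at the depth nearest the current estimate, mirroring the procedure of adopting new depth calibration data as better depth information becomes available.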