DRIVER ASSISTANCE SYSTEM

MAJOR PROJECT - 2024


Road safety has always been an important concern for development and public safety, since traffic accidents are among the leading causes of injury and death across the world. Active engagement in the development of modern transportation systems, road safety measures, and related collaborations can help reduce the number of traffic accidents. The proposed research seeks to contribute significantly to the various initiatives to broaden Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicles (AVs). It comprises six components: a drowsiness recognition and alerting system, lane detection, a lane departure warning system, a lane keeping assistance system, object detection and recognition, and a collision warning system. The drowsiness detection and alert system runs throughout the ride to monitor the driver's focus level. Lane detection is carried out using an ultra-fast lane detector, which provides spatial awareness by recognizing road lane markings. The lane departure warning system activates whenever the vehicle drifts out of its lane without signaling, alerting the driver immediately. If the vehicle continues to deviate from the lane, the lane keeping assistance system adjusts the vehicle's steering to keep it in the lane. Concurrently, object detection and recognition using YOLO identifies a wide range of items on the road, which is crucial for the collision warning system. The collision warning system employs OpenCV and distance measurement to monitor the vehicle's surroundings and alert the driver to possible collision risks. This integration aims to enhance driver safety by providing timely warnings and assistance throughout the driving process.
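The distance measurement behind the collision warning can be illustrated with the standard pinhole-camera similar-triangles relation. The constants and helper names below are hypothetical illustrations, not taken from the repository's code:

```python
# Hypothetical sketch of monocular distance estimation via similar triangles:
# distance = (real object width * focal length in pixels) / width in pixels.

KNOWN_WIDTH_M = 1.8        # assumed average car width in metres (assumption)
FOCAL_LENGTH_PX = 700.0    # focal length in pixels; must be calibrated per camera
WARNING_DISTANCE_M = 10.0  # alert threshold in metres (assumption)

def estimate_distance(box_width_px: float) -> float:
    """Estimate distance to a detected vehicle from its bounding-box width."""
    return KNOWN_WIDTH_M * FOCAL_LENGTH_PX / box_width_px

def collision_warning(box_width_px: float) -> bool:
    """True when the estimated distance falls below the warning threshold."""
    return estimate_distance(box_width_px) < WARNING_DISTANCE_M
```

In practice the bounding-box width would come from the YOLO detector's output for each vehicle in frame.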

Proposed system


The proposed system has two points of view:

  1. POV_1 -> Monitoring the external environment
  2. POV_2 -> Monitoring the internal environment

POV_1 : Monitoring the external environment

Class UML diagram

Collaboration UML diagram

Sequence UML diagram

Use case UML diagram

POV_2 : Monitoring the internal environment

UML diagram

Pseudocode

Code setup :

Requirements :

  • Python 3.7+

  • OpenCV, scikit-learn, onnxruntime, PyCUDA, and PyTorch.

  • Install :

    The requirements.txt file should list all Python libraries that the notebooks depend on; install them with:

    pip install -r requirements.txt
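The repository's requirements.txt is not reproduced here; an illustrative file covering the libraries listed above (PyPI package names assumed, versions unpinned) might look like:

```text
opencv-python
scikit-learn
onnxruntime
pycuda
torch
```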
    

Examples :

  • Download YOLO series ONNX models :

    Use the Google Colab notebooks below to convert:

    | Model | Release version | Link |
    | --- | --- | --- |
    | YOLOv5 | v6.2 | Open In Colab |
    | YOLOv6/Lite | 0.4.0 | Open In Colab |
    | YOLOv7 | v0.1 | Open In Colab |
    | YOLOv8 | 8.1.27 | Open In Colab |
    | YOLOv9 | v0.1 | Open In Colab |
  • Convert ONNX to TensorRT model :

    Modify onnx_model_path and trt_model_path before converting.

    python convertOnnxToTensorRT.py -i <path-of-your-onnx-model>  -o <path-of-your-trt-model>
    
  • Quantize ONNX models :

    Converting a model's weights from float32 to float16 roughly halves the model size.

    python onnxQuantization.py -i <path-of-your-onnx-model>
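As a quick back-of-the-envelope check of that size claim (plain NumPy, independent of the repository's script): float16 uses two bytes per value versus four for float32, so the raw weight storage halves:

```python
import numpy as np

# One million float32 weights vs the same weights cast to float16.
weights_f32 = np.zeros(1_000_000, dtype=np.float32)
weights_f16 = weights_f32.astype(np.float16)

print(weights_f32.nbytes)  # 4000000 bytes
print(weights_f16.nbytes)  # 2000000 bytes
```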
    
  • Video Inference :

    • Setting Config :

      Note : both ONNX and TensorRT format models are supported, but the model file must match the configured model_type.

    lane_config = {
     "model_path": "./TrafficLaneDetector/models/culane_res18.trt",
     "model_type" : LaneModelType.UFLDV2_CULANE
    }
    
    object_config = {
     "model_path": './ObjectDetector/models/yolov8l-coco.trt',
     "model_type" : ObjectModelType.YOLOV8,
     "classes_path" : './ObjectDetector/models/coco_label.txt',
     "box_score" : 0.4,
     "box_nms_iou" : 0.45
    }
    
    | Target | Model Type | Description |
    | --- | --- | --- |
    | Lanes | LaneModelType.UFLD_TUSIMPLE | Support Tusimple data with ResNet18 backbone. |
    | Lanes | LaneModelType.UFLD_CULANE | Support CULane data with ResNet18 backbone. |
    | Lanes | LaneModelType.UFLDV2_TUSIMPLE | Support Tusimple data with ResNet18/34 backbone. |
    | Lanes | LaneModelType.UFLDV2_CULANE | Support CULane data with ResNet18/34 backbone. |
    | Object | ObjectModelType.YOLOV5 | Support yolov5n/s/m/l/x model. |
    | Object | ObjectModelType.YOLOV5_LITE | Support yolov5lite-e/s/c/g model. |
    | Object | ObjectModelType.YOLOV6 | Support yolov6n/s/m/l, yolov6lite-s/m/l model. |
    | Object | ObjectModelType.YOLOV7 | Support yolov7 tiny/x/w/e/d model. |
    | Object | ObjectModelType.YOLOV8 | Support yolov8n/s/m/l/x model. |
    | Object | ObjectModelType.YOLOV9 | Support yolov9s/m/c/e model. |
    | Object | ObjectModelType.EfficientDet | Support efficientDet b0/b1/b2/b3 model. |
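Since the model file must match the configured type, a small sanity check before loading can catch mismatched extensions early. The helper below is a hypothetical illustration, not part of the repository:

```python
import os

# File formats the detectors accept (per the note above).
SUPPORTED_EXTENSIONS = {".onnx", ".trt"}

def check_model_path(config: dict) -> str:
    """Return the model file's extension, or raise if it is unsupported."""
    ext = os.path.splitext(config["model_path"])[1].lower()
    if ext not in SUPPORTED_EXTENSIONS:
        raise ValueError(f"Unsupported model format: {ext}")
    return ext

lane_config = {"model_path": "./TrafficLaneDetector/models/culane_res18.trt"}
check_model_path(lane_config)  # returns ".trt"
```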
    • Run for POV_1:
    python demo.py
    
    • Run for POV_2: (change directory to the Drowsiness detector folder)
    python detect.py
    

Hardware setup :

Hardware requirements :

  1. Jetson Orin Nano
  2. Servo motor
  3. Pi Camera Module 3
  4. Power bank (10000 to 20000 mAh)
  5. Arduino Nano
  6. ADXL-345 accelerometer
  7. GPS NEO-6M
  8. GSM SIM800L module
  9. LM2596 step-down converter
  10. Zero PCB
  11. 12 V 2 A power supply
  12. 15-pin to 22-pin cable

Hardware Integration :

Results :

POV_1 : Monitoring the external environment

  1. Front Collision Warning System :

  2. Lane Departure Warning System :

  3. Lane Keeping Assist System :

POV_2 : Monitoring the internal environment

[ EAR -> Eye aspect ratio ]
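The drowsiness detector is based on the eye aspect ratio. A minimal sketch of the standard EAR computation (Soukupová and Čech's formulation), assuming six (x, y) eye landmarks ordered p1..p6 as in dlib's 68-point model; the 0.2 threshold is a common choice, not necessarily the one used here:

```python
import math

def ear(p):
    """Eye aspect ratio for six (x, y) landmarks p[0]..p[5].

    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). It stays roughly constant
    while the eye is open and drops toward 0 as the eye closes, so an EAR
    below a threshold (~0.2, an assumption) over consecutive frames
    signals drowsiness.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

# Toy landmarks: a tall (open) eye vs a nearly flat (closed) eye.
open_eye = [(0, 0), (1, -1), (2, -1), (3, 0), (2, 1), (1, 1)]
closed_eye = [(0, 0), (1, -0.1), (2, -0.1), (3, 0), (2, 0.1), (1, 0.1)]
```

For `open_eye` the ratio is well above 0.2, while for `closed_eye` it falls far below it, which is what separates the "Active driver" and "Drowsy driver" cases below.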

  1. Active driver :

  2. Drowsy driver :


This repository contains the submission for our project by Madadapu HemanthSai and team, B.Tech CSE - AIML students from MLR Institute of Technology. The project demonstrates proficiency in machine learning and computer vision.

Submitted By

  • Name: Team - 10
  • Program: B.Tech CSE - AIML
  • Institution: MLR Institute of Technology
  • Contact:

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.