<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Transport Research International Documentation (TRID)</title>
    <link>https://trid.trb.org/</link>
    <atom:link href="https://trid.trb.org/Record/RSS?s=PHNlYXJjaD48cGFyYW1zPjxwYXJhbSBuYW1lPSJkYXRlaW4iIHZhbHVlPSJhbGwiIC8+PHBhcmFtIG5hbWU9InN1YmplY3Rsb2dpYyIgdmFsdWU9Im9yIiAvPjxwYXJhbSBuYW1lPSJ0ZXJtc2xvZ2ljIiB2YWx1ZT0ib3IiIC8+PHBhcmFtIG5hbWU9ImxvY2F0aW9uIiB2YWx1ZT0iMCIgLz48L3BhcmFtcz48ZmlsdGVycz48ZmlsdGVyIGZpZWxkPSJpbmRleHRlcm1zIiB2YWx1ZT0iJnF1b3Q7RGlzcGxheXMmcXVvdDsiIG9yaWdpbmFsX3ZhbHVlPSImcXVvdDtEaXNwbGF5cyZxdW90OyIgLz48L2ZpbHRlcnM+PHJhbmdlcyAvPjxzb3J0cz48c29ydCBmaWVsZD0icHVibGlzaGVkIiBvcmRlcj0iZGVzYyIgLz48L3NvcnRzPjxwZXJzaXN0cz48cGVyc2lzdCBuYW1lPSJyYW5nZXR5cGUiIHZhbHVlPSJwdWJsaXNoZWRkYXRlIiAvPjwvcGVyc2lzdHM+PC9zZWFyY2g+" rel="self" type="application/rss+xml" />
    <description></description>
    <language>en-us</language>
    <copyright>Copyright © 2026. National Academy of Sciences. All rights reserved.</copyright>
    <docs>http://blogs.law.harvard.edu/tech/rss</docs>
    <managingEditor>tris-trb@nas.edu (Bill McLeod)</managingEditor>
    <webMaster>tris-trb@nas.edu (Bill McLeod)</webMaster>
    <image>
      <title>Transport Research International Documentation (TRID)</title>
      <url>https://trid.trb.org/Images/PageHeader-wTitle.jpg</url>
      <link>https://trid.trb.org/</link>
    </image>
    <item>
      <title>Spinal segmentation algorithm for modelling Chinese digital human models</title>
      <link>https://trid.trb.org/View/2635598</link>
      <description><![CDATA[Low-dose spinal CT images often suffer from issues such as blurred boundaries, significant noise, and poor contrast, which complicate manual segmentation. Traditional spinal image segmentation algorithms, although fast, generally lack precision and require manual intervention. Meanwhile, deep learning-based methods require extensive datasets for support, limiting their widespread applicability. To overcome these limitations, this paper introduces 3D-TSUnet, a method that first employs traditional segmentation algorithms for pre-segmentation, followed by detailed segmentation using a refined 3D-Unet network. Comparisons with manual segmentation show a 98.28% reduction in self-intersections, a 95.05% decrease in highly refractive edges, an 89.59% reduction in spikes, and a 96.48% reduction in incorrect partitions, with segmentation time reduced by 91.67%. These results demonstrate that the proposed network efficiently performs low-dose CT spinal segmentation, offering substantial practical value for developing Chinese human finite element models and advancing related research.]]></description>
      <pubDate>Wed, 11 Mar 2026 14:45:15 GMT</pubDate>
      <guid>https://trid.trb.org/View/2635598</guid>
    </item>
    <item>
      <title>Evaluating the safety implications of glass curtain wall LED media façade on night-time driving: The driver’s perspective</title>
      <link>https://trid.trb.org/View/2666694</link>
      <description><![CDATA[The presence of glass curtain wall LED media façade (G-L-M) along roads has disrupted the ambient light environment, impeded drivers’ visibility and posed potential risks to night-time driving safety. This study investigates the effects of G-L-M technical parameters on driving distraction by constructing night-time simulation scenarios and obtaining the driver’s reaction time for recognising small targets. The study revealed that the field of view, colour and state of G-L-M are the most crucial factors, whereas luminance and area have relatively minor effects. Specifically, the G-L-M located in the fovea centralis and peripheral field of view regions (−30° to −15°) were more likely to cause driving distraction than those in the central field of view. Red, green and blue are associated with higher reaction times and failure rates, whereas white has the lowest reaction time and failure rate. Additionally, we found that the difference between high and low luminance was not significant. However, appropriate high luminance can enhance the recognition rate of small targets. Dynamic G-L-M significantly increases reaction time compared to static G-L-M. This study can provide a reference for assessing the effects of G-L-M to ensure night-time roadway driving safety and for formulating future regulations and designing G-L-M lighting.]]></description>
      <pubDate>Tue, 03 Mar 2026 14:48:53 GMT</pubDate>
      <guid>https://trid.trb.org/View/2666694</guid>
    </item>
    <item>
      <title>Mid-air haptics for automotive HUDs: A sketch anticipation and design fiction study</title>
      <link>https://trid.trb.org/View/2667363</link>
      <description><![CDATA[Innovations in the automotive industry, such as autonomous driving, AI assistants, head-up displays, and mid-air haptic touchless interactions, promise transformative benefits but may also introduce unanticipated risks and ethical concerns. To explore these potential challenges, we conducted a multi-stage study: first, we engaged 27 engineers specializing in touchless and automotive systems to envision future applications of mid-air haptics and head-up displays. Insights from this anticipatory design fiction informed the creation of high-fidelity storyboard sketches depicting six hypothetical scenarios. Using these storyboards and a custom questionnaire, we then surveyed 135 drivers across nine countries to assess their views on technology acceptance, interface usability, and responsible innovation. Results revealed significant demographic variability, alongside a dual sentiment: while drivers express enthusiasm for technological integration, they also voice concerns about safety, user control, and privacy. Our findings not only inform safer and more user-centered automotive innovation but also offer a multimodal framework for evaluating and guiding emerging technologies across diverse fields.]]></description>
      <pubDate>Mon, 23 Feb 2026 11:24:46 GMT</pubDate>
      <guid>https://trid.trb.org/View/2667363</guid>
    </item>
    <item>
      <title>MTRCP: Multimodal Two-Level Fusion Architecture for Roadside Cooperative Perception</title>
      <link>https://trid.trb.org/View/2625411</link>
      <description><![CDATA[With the rapid development of self-driving technologies, autonomous driving still faces challenges because of the complexity of traffic environments, making accurate and stable environmental perception crucial. Roadside units (RSUs) can significantly enhance the perception range of autonomous vehicles, effectively addressing blind spots and improving traffic safety. This article proposes the Multimodal Two-Level Fusion Architecture for Roadside Cooperative Perception, a novel framework that leverages RSUs to enable comprehensive and over-the-horizon perception capabilities. We fuse point clouds and images at the first level to generate local detection results and then perform a second-level fusion to integrate these local results, producing full-area perception. Experimental validation with the DAIR-V2X-seq dataset and data collected at the China Telecom Smart Grid Test Park demonstrates the effectiveness and feasibility of the proposed cooperative perception architecture.]]></description>
      <pubDate>Mon, 23 Feb 2026 11:23:12 GMT</pubDate>
      <guid>https://trid.trb.org/View/2625411</guid>
    </item>
    <item>
      <title>Enhancing Safety on Mountainous Curves: A Priority-Enabled Warning System in a Connected Vehicle Environment</title>
      <link>https://trid.trb.org/View/2613355</link>
      <description><![CDATA[Compound curves in mountainous areas often present significant safety challenges. While connected vehicle-based curve warning systems have been developed and tested, they predominantly focus on either lateral control through speed warnings or longitudinal control via lane deviation warnings. An integrated system combining warnings while prioritizing primary and secondary alerts remains underexplored. Moreover, current implementations often rely on heads-down display devices, which can divert drivers’ attention away from the road. In contrast, in-vehicle heads-up displays (HUDs) offer a promising solution to provide real-time, safety-critical instructions without disrupting drivers’ situational awareness. This study aims to develop a Multi-dimensional Priority-enabled Curve Warning System via Heads-up Display (MP-CWS_HUD) in the CV environment, integrating longitudinal speed and lateral deviation warnings, and verify its effectiveness by involving human subjects. Forty-five participants evaluated the MP-CWS_HUD. Results show that the developed MP-CWS_HUD significantly enhances curve driving safety under specific circumstances.]]></description>
      <pubDate>Fri, 20 Feb 2026 15:28:25 GMT</pubDate>
      <guid>https://trid.trb.org/View/2613355</guid>
    </item>
    <item>
      <title>Decision-Fatigue Mitigation in Automotive HMI System</title>
      <link>https://trid.trb.org/View/2663381</link>
      <description><![CDATA[With the advent of digital displays in the driver cabins of commercial vehicles, drivers are offered many features that convey useful or critical information or prompt them to act. Due to the availability of a vast number of features, drivers face decision fatigue in choosing the appropriate ones. Many are unaware of all the functionalities available in the Human Machine Interface (HMI) system, leading to bare-minimum usage or complete neglect of helpful features. This not only affects driving efficiency but also increases cognitive load, especially in complex driving scenarios. To alleviate this fatigue and reduce the induced reluctance to choose appropriate features, we propose an AI-driven recommendation agent that helps the driver choose features. Instead of manually choosing between multiple settings, the driver can simply activate the recommendation mode, allowing the system to optimize selections dynamically. The novelty of this proposal lies in introducing intelligence into HMI systems in a way that maximizes operational usage and reduces decision fatigue in drivers. In this paper, we propose a novel metric, the "Decision Fatigue Index," to quantify the reduction in the driver's cognitive load, together with AI models trained on driver preferences, road conditions, vehicle dynamics, and user customizations. The most relevant mitigation and intervention strategies are then surfaced in the HMI, which enhances ease of use, improves safety, and ensures that drivers receive the most relevant assistance.]]></description>
      <pubDate>Fri, 20 Feb 2026 15:28:18 GMT</pubDate>
      <guid>https://trid.trb.org/View/2663381</guid>
    </item>
    <item>
      <title>AI-Powered LiDAR Point Cloud Understanding and Processing: An Updated Survey</title>
      <link>https://trid.trb.org/View/2591234</link>
      <description><![CDATA[Recent advances have enhanced the efficiency and availability of 3D data processing and scene understanding technologies, confirming the pivotal status of the point cloud data structure in 3D data transmission and storage. However, the intrinsic defects of the point cloud data structure have always posed a considerable challenge for model complexity and accuracy. As substantial research has been conducted over the years, numerous compelling architectures for LiDAR point clouds have been proposed in succession. To facilitate further research, we present a systematic, integrated survey focusing explicitly on more than 200 key contributions to deep learning-based 3D LiDAR point cloud processing over the past five years, detailing the evolution of point cloud feature extraction techniques and specific deep learning-based tasks. Building on an introduction to the hardware sensors and devices of several LiDAR systems, the working mechanisms and principles of 3D LiDAR point cloud acquisition and storage are explained. Moreover, about 30 publicly accessible datasets, including classical and recent releases, are summarized and collated according to different tasks. Comprehensive insights into the research challenges and opportunities in this topic are offered. The contribution of this paper is an up-to-date and comprehensive overview of the field, inspiring innovative explorations and further achievements in AI-powered LiDAR point cloud understanding and processing.]]></description>
      <pubDate>Thu, 19 Feb 2026 17:02:31 GMT</pubDate>
      <guid>https://trid.trb.org/View/2591234</guid>
    </item>
    <item>
      <title>MonoGSDet: Monocular 3D Object Detection With Gaussian Splatting in Autonomous Driving</title>
      <link>https://trid.trb.org/View/2658997</link>
      <description><![CDATA[Monocular 3D object detection has gained considerable attention because of its cost-effectiveness and practical applicability, particularly in autonomous driving and robotics. Most existing methods tackle this task by regressing grouped 3D attributes of the target objects. However, accurate 3D object detection remains challenging due to the inherent ill-posedness of mapping from single 2D images to 3D space. This paper proposes MonoGSDet, a novel monocular 3D detection framework that integrates a detection network with 3D Gaussian Splatting (3DGS) to enhance depth feature learning for more accurate 3D object detection. The proposed method consists of training and testing stages. In the training stage, to facilitate learning 3D spatial features in the backbone, a Lightweight 3D Gaussian Predictor (LGP) module is proposed to predict 3DGS-like attributes from the feature maps, followed by volume rendering to optimize 3D representations. In addition, a Ground Plane Constraint Module (GPCM) is added to the detection head to optimize horizon positions using a homography matrix and ground plane constraints for more accurate depth prediction. In the testing stage, given a single image, the trained network with the backbone and detection head predicts the 3D attributes of objects. Our experiments on the KITTI dataset demonstrate the effectiveness of our approach, achieving state-of-the-art performance without compromising inference speed.]]></description>
      <pubDate>Thu, 19 Feb 2026 10:53:38 GMT</pubDate>
      <guid>https://trid.trb.org/View/2658997</guid>
    </item>
    <item>
      <title>Efficient 3D object annotation via vision-derived pseudo-LiDAR and Vision Language Model (VLM) validation</title>
      <link>https://trid.trb.org/View/2618058</link>
      <description><![CDATA[To advance autonomous driving, accurate 3D object annotation is crucial for target recognition, environment perception, and high-precision map construction. However, producing high-quality 3D annotated data is costly and time-consuming. In particular, for sparse point cloud data, it is both labor-intensive and error-prone to annotate 3D objects. To address this challenge, this paper proposes an efficient automated annotation pipeline that integrates pseudo-point cloud generation with validation using a vision language model (VLM). Our approach supplements sparse point cloud data, generates pseudo-labels, and leverages a VLM model to validate and filter annotations, thereby creating a closed-loop automated system. Experiments on a real-world dataset collected by an autonomous vehicle demonstrate significant improvements in annotation accuracy and efficiency.]]></description>
      <pubDate>Wed, 11 Feb 2026 09:17:43 GMT</pubDate>
      <guid>https://trid.trb.org/View/2618058</guid>
    </item>
    <item>
      <title>Sound absorption characteristics of rubberised porous asphalt mixture</title>
      <link>https://trid.trb.org/View/2618015</link>
      <description><![CDATA[Traffic noise pollution has increased the demand for low-noise pavements. Improving the sound absorption performance of asphalt mixtures is crucial for reducing tyre-pavement noise. This study investigates the sound absorption characteristics of rubberised porous asphalt mixtures (RPAM) at macro- and micro-scales, focusing on the effect of rubber particles. The results indicated that there was a significant correlation between the maximum aggregate size, rubber particle content and the sound absorption performance of RPAM. Furthermore, the pore characteristics of RPAM with different maximum aggregate sizes and rubber particle contents were extracted by X-ray CT scanning and 3D reconstruction techniques, and the impact on sound absorption performance was investigated. The grey correlation analysis showed that pore characteristic parameters including throat length and coordination number had a more significant impact on the sound absorption performance. Reducing the maximum aggregate size and rubber particle content can increase these parameters, thereby improving the sound absorption performance.]]></description>
      <pubDate>Mon, 09 Feb 2026 13:55:11 GMT</pubDate>
      <guid>https://trid.trb.org/View/2618015</guid>
    </item>
    <item>
      <title>LGVINS: LiDAR-GPS-Visual and Inertial System Based Multi-Sensor Fusion for Smooth and Reliable UAV State Estimation</title>
      <link>https://trid.trb.org/View/2617938</link>
      <description><![CDATA[With the development of autonomous unmanned aerial vehicles (UAVs), precise state estimation is a fundamental aspect of autonomous flight and plays a critical role in enabling robots, especially in GPS-denied environments, to operate safely, reliably, and effectively across a wide range of applications and operational scenarios. In this paper, we propose a tightly coupled multi-sensor filtering framework for robust UAV/UGV state estimation, which integrates data from an Inertial Measurement Unit (IMU), a stereo camera, GPS, and 3D range measurements from two Light Detection and Ranging (LiDAR) sensors. The proposed LGVINS system significantly improves the accuracy and robustness of state estimation in both structured and unstructured outdoor environments, such as bridge inspections, open fields, urban areas, and areas near buildings. It also improves positioning accuracy in scenarios with or without GPS signals. The goal is to exploit the mutually exclusive strengths of these sensor modalities: visual, inertial, and LiDAR techniques are combined to compensate for the robot's state estimation errors in multiple challenging outdoor environments. The framework effectively reduces long-term trajectory drift and ensures smooth, continuous state estimation, regardless of GPS satellite availability. We demonstrate and evaluate the LGVINS approach on a public dataset as well as our own dataset, collected with the proposed hardware integration on a UAV and deployed on computationally constrained systems. The results show that the proposed system achieves higher accuracy and robustness in state estimation across various environments compared to currently available methods.]]></description>
      <pubDate>Mon, 09 Feb 2026 08:53:26 GMT</pubDate>
      <guid>https://trid.trb.org/View/2617938</guid>
    </item>
    <item>
      <title>Pose Trajectory Formation Mechanism for Corner Module Vehicles in Highway Lane-Changing Scenarios</title>
      <link>https://trid.trb.org/View/2617935</link>
      <description><![CDATA[The lateral motion of traditional vehicles cannot be controlled actively; it can only be realized through coordinated control of longitudinal and yaw motion. In highway lane-changing scenarios, an angle between the road tangent and the vehicle orientation must be created. This gives the vehicle a velocity component perpendicular to the road, thereby generating the lateral displacement of the vehicle relative to the road. However, if the vehicle's lateral motion can be controlled actively, the angle between the road tangent and the vehicle orientation becomes unnecessary. This needless angle may adversely affect driver visibility, vehicle stability, and comfort. To solve this problem, this paper presents a pose trajectory concept for lane changing, comprising longitudinal and lateral positions and yaw angles, using a corner module vehicle as the research subject. The kinematic and dynamic constraints of the vehicle are established. Considering lane-changing efficiency, lateral performance, yaw angle deviation from the road direction, and yaw performance, the optimization objectives are established. A particle swarm optimization algorithm is used to find the optimal trajectory among the proposed pose trajectories in highway lane-changing scenarios, and the traceability of the optimal pose trajectory is verified using a model predictive control method based on a four-wheel steering vehicle model. Finally, the two optimal trajectories, from the proposed pose trajectories and from traditional position trajectories, are compared. The results show that the proposed pose trajectories can reduce the yaw angle deviation and substantially improve vehicle yaw stability and comfort without reducing lane-changing efficiency.]]></description>
      <pubDate>Mon, 09 Feb 2026 08:53:26 GMT</pubDate>
      <guid>https://trid.trb.org/View/2617935</guid>
    </item>
    <item>
      <title>CoRange: Collaborative Range-Aware Adaptive Fusion for Multi-Agent Perception</title>
      <link>https://trid.trb.org/View/2617961</link>
      <description><![CDATA[Collaborative perception facilitates information exchange and communication among neighbouring agents and is a competitive solution to the limited field of view of a single vehicle in autonomous driving. Although collaborative perception has bright application prospects and can greatly enhance vehicle safety in transportation systems, it often requires substantial communication and computation overhead to achieve efficient collaboration. Even worse, the sparse point cloud data produced by distant objects impairs the perceptual abilities of individual agents, and unavoidable errors in agents' poses result in misaligned global observations and low perception accuracy. In this work, we focus on the multi-agent collaborative perception problem and present a Collaborative Range-aware adaptive fusion framework named CoRange to achieve communication-efficient and fusion-effective perception of rapidly changing environments. First, we present a range-aware communication mechanism that reduces misjudgment and minimizes bandwidth consumption by fully considering the characteristics of range information and selectively transmitting the features of critical regions. Then, we design a local and agent-wise attention module to handle point clouds at different distances and capture relationships between heterogeneous agents, with the objective of enhancing detection accuracy and robustness against pose errors. Finally, we adopt a hierarchical adaptive fusion method to effectively integrate features representing diverse semantic information and provide multi-source representations for ego agents. Experiments on both simulated and real-world datasets validate the efficiency and effectiveness of the proposed approach, and the results demonstrate that it exhibits superior performance under limited communication bandwidth and in noisy environments in LiDAR-based object detection tasks.]]></description>
      <pubDate>Mon, 09 Feb 2026 08:53:26 GMT</pubDate>
      <guid>https://trid.trb.org/View/2617961</guid>
    </item>
    <item>
      <title>An ergonomics study on side- and rear-view CMS display locations in two lane-changing scenarios</title>
      <link>https://trid.trb.org/View/2636356</link>
      <description><![CDATA[To address existing knowledge gaps in human factors research on camera monitor system (CMS) display layout, this study investigated the effects of side- and rear-view CMS display locations under two lane-changing scenarios with different levels of urgency. Fifty participants performed a simulated lane-changing task four times in each of 12 driving conditions (2 side-view display locations × 3 rear-view display locations × 2 driving scenarios), and their response time, number of collisions, eyes-off-the-road time, and subjective ratings (accuracy, learnability, memorability, intuitiveness, preference, and satisfaction) were collected. The findings highlight the importance of aligning CMS display locations with the driver's mental model by positioning the displays near the traditional mirror locations, while minimizing eye gaze travel distance by positioning them close to the driver's forward line of sight. Additionally, the relative importance of these two conflicting design characteristics may vary depending on the context-dependent roles of CMS displays.]]></description>
      <pubDate>Thu, 05 Feb 2026 09:16:42 GMT</pubDate>
      <guid>https://trid.trb.org/View/2636356</guid>
    </item>
    <item>
      <title>Influence analysis of soundscape design for street activities in the virtual environment</title>
      <link>https://trid.trb.org/View/2613516</link>
      <description><![CDATA[In recent years, urban street improvements have focused on creating people-centered, walkable environments that encourage diverse activities. However, these designs emphasize visual elements, with limited attention to soundscapes. This study explores the impact of street soundscapes in activity spaces on walkability evaluations. First, based on a literature review, soundscape scenarios for street activities were created with 3DCG street models. Then, a survey was conducted to evaluate the virtual streets in terms of perceptions and behavioral willingness. Finally, the impacts of the street soundscapes on walkability evaluations were analyzed. The results showed that, while the visual impacts of street design are greater than the sound impacts, street soundscapes for excitement and relaxation affect behavioral willingness to walk around and to stay differently, through their multi-dimensional impacts on the perceived walkability of impression and functions.]]></description>
      <pubDate>Tue, 20 Jan 2026 10:17:49 GMT</pubDate>
      <guid>https://trid.trb.org/View/2613516</guid>
    </item>
  </channel>
</rss>