<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Transport Research International Documentation (TRID)</title>
    <link>https://trid.trb.org/</link>
    <atom:link href="https://trid.trb.org/Record/RSS?s=PHNlYXJjaD48cGFyYW1zPjxwYXJhbSBuYW1lPSJkYXRlaW4iIHZhbHVlPSJhbGwiIC8+PHBhcmFtIG5hbWU9InN1YmplY3Rsb2dpYyIgdmFsdWU9Im9yIiAvPjxwYXJhbSBuYW1lPSJ0ZXJtc2xvZ2ljIiB2YWx1ZT0ib3IiIC8+PHBhcmFtIG5hbWU9ImxvY2F0aW9uIiB2YWx1ZT0iMCIgLz48L3BhcmFtcz48ZmlsdGVycz48ZmlsdGVyIGZpZWxkPSJpbmRleHRlcm1zIiB2YWx1ZT0iJnF1b3Q7RXllJnF1b3Q7IiBvcmlnaW5hbF92YWx1ZT0iJnF1b3Q7RXllJnF1b3Q7IiAvPjwvZmlsdGVycz48cmFuZ2VzIC8+PHNvcnRzPjxzb3J0IGZpZWxkPSJwdWJsaXNoZWQiIG9yZGVyPSJkZXNjIiAvPjwvc29ydHM+PHBlcnNpc3RzPjxwZXJzaXN0IG5hbWU9InJhbmdldHlwZSIgdmFsdWU9InB1Ymxpc2hlZGRhdGUiIC8+PC9wZXJzaXN0cz48L3NlYXJjaD4=" rel="self" type="application/rss+xml" />
    <description></description>
    <language>en-us</language>
    <copyright>Copyright © 2026. National Academy of Sciences. All rights reserved.</copyright>
    <docs>http://blogs.law.harvard.edu/tech/rss</docs>
    <managingEditor>tris-trb@nas.edu (Bill McLeod)</managingEditor>
    <webMaster>tris-trb@nas.edu (Bill McLeod)</webMaster>
    <image>
      <title>Transport Research International Documentation (TRID)</title>
      <url>https://trid.trb.org/Images/PageHeader-wTitle.jpg</url>
      <link>https://trid.trb.org/</link>
    </image>
    <item>
      <title>Analyzing drivers' visual attention towards intersection conflict warning system: A study using driving simulator and eye tracking system</title>
      <link>https://trid.trb.org/View/2657083</link>
      <description><![CDATA[Unsignalized intersections are considered among the most hazardous road locations, where drivers must carefully process visual information to make safe decisions, as improper attention allocation or a lack of information on approaching traffic can lead to crashes. The Intersection Conflict Warning System (ICWS) has been identified as a potential solution; however, its influence on drivers' visual performance remains unexplored. This study investigates the effect of ICWS on drivers' visual performance at unsignalized intersections using a driving simulator and an eye-tracking system. Forty-six licensed drivers participated in this study, and drivers' eye movement behavior towards the ICWS was analyzed under various warning and intersection visibility conditions. The effect of education about ICWS was also examined. Experimental results showed that at restricted-view intersections, drivers had 46% longer fixation durations and 34% more fixations on warning signboards than at clear-view intersections. Under ICWS-activated conditions, drivers exhibited significantly longer fixation durations, and a higher proportion (72%) reacted after gazing at the signboard compared to non-activated ICWS conditions (39%). Furthermore, middle-aged drivers demonstrated a shorter time to first fixation on the signboard than younger drivers under ICWS-activated conditions. The findings highlight that ICWS enables drivers to notice warning signboards promptly, initiate earlier visual searches for conflicting vehicles, and respond more quickly to potential conflicts, supporting its application as an effective countermeasure for enhancing safety at unsignalized intersections.]]></description>
      <pubDate>Wed, 25 Feb 2026 13:59:09 GMT</pubDate>
      <guid>https://trid.trb.org/View/2657083</guid>
    </item>
    <item>
      <title>Exploring the Role of Leadership in Pedestrian Evacuations Using a Virtual Reality (VR) Environment: An Eye-Tracking Study</title>
      <link>https://trid.trb.org/View/2562111</link>
      <description><![CDATA[Effective crowd management during evacuations depends on timely information and leadership that guides evacuees to safety. This study investigates factors influencing pedestrian behavior in outdoor evacuations using immersive virtual reality (VR). Data from 27 participants, collected through eye-tracking and physiological sensors, examined leader credibility, leadership style (visual versus visual-verbal), stress levels, leader gender, familiarity, and crowd behavior. Mann-Whitney U tests analyzed attention and physiological responses, while a mixed binary logistic regression model identified predictors of leader-following. Results indicated that leader credibility and environmental familiarity were key predictors of leader-following. Participants were more likely to follow leaders in scenarios that included both visual and verbal leadership, male leaders, and normal conditions. Female participants showed a greater likelihood to follow leaders. Analysis of eye-tracking measures revealed increased attention and scanning spans among followers, particularly at night and under normal conditions. These findings highlight the importance of leadership dynamics and inform targeted strategies for effective evacuation management.]]></description>
      <pubDate>Fri, 20 Feb 2026 15:28:26 GMT</pubDate>
      <guid>https://trid.trb.org/View/2562111</guid>
    </item>
    <item>
      <title>Pupillometry in Driver Workload Assessment: Opportunities and Challenges</title>
      <link>https://trid.trb.org/View/2647803</link>
      <description><![CDATA[The analysis of a road driver’s pupillometry has been correlated with variation in stress levels induced by the environmental context. However, many other variables generally influence pupil size, and the conclusions reached by some previous studies suffer from this operational difficulty. This research aims to correlate particularly stressful events during driving with changes in pupil diameter over time, proposing appropriate summary indicators that can be used in subsequent statistical or predictive analyses. To pursue these objectives, an experiment was set up in a simulated environment in which a user wore an eye tracker capable of detecting pupil diameter. The simulated rural road was characterized by a number of sudden events (a broken-down vehicle at the roadside, a child crossing the road outside the zebra crossing, etc.) and transitions between different levels of vehicle automation (SAE Levels of Assistance 0 to 2). The data collected made it possible to quantify indicators relating to the mean and standard deviation. The analysis of a single participant made it possible to inspect the pupillometric signal very carefully, assessing the width of the time window within which these indicators could be calculated. It was found that the mean is useful for representing the state of a driver over fairly long events (differences between drivers with different levels of assistance), whereas the standard deviation is more suitable for assessing sudden events. The proposed procedure can be used for various purposes: 1) correction and calibration of ADAS dedicated to the driver’s state of stress; 2) feeding an alert system for the user; 3) identification of road elements that should be the object of attention or maintenance interventions by the road manager; 4) feeding a database on a web-GIS platform to highlight common behaviour among users in correspondence with particular weather or traffic conditions.]]></description>
      <pubDate>Fri, 20 Feb 2026 15:28:25 GMT</pubDate>
      <guid>https://trid.trb.org/View/2647803</guid>
    </item>
    <item>
      <title>Decorating Sidewalls in Extra-Long Tunnels to Enhance Driving Safety: An Investigation into Its Effect on Visual Behavior</title>
      <link>https://trid.trb.org/View/2613110</link>
      <description><![CDATA[As a new type of traffic safety facility, the decorated sidewall plays a crucial role in the safety improvement and quality upgrading of tunnels. However, due to the lack of standards for setting decorated sidewalls in tunnels, the influence of various decorated sidewalls on drivers’ attention has aroused widespread controversy. To quantitatively analyze the influence of decorated sidewalls on driver distraction in the tunnel, five decorated-sidewall schemes were designed in a driving simulation and compared with a tunnel without a decorated sidewall. A total of 27 drivers were recruited, and fine-grained eye-tracking data were obtained to extract evaluation indicators of driving distraction. Differences in visual behavior characteristics under the influence of the decorated sidewall were explored through repeated-measures ANOVA. The results show that the decorated sidewall does not cause excessive distraction for the driver; an appropriate decorated sidewall has a significant effect on improving driving performance; and a decorated sidewall with a rhythm of 1.27 Hz can effectively improve the driver’s alertness and visual perception while maintaining a low visual load, demonstrating the best potential for driving safety. These results provide theoretical support and data guarantees for using decorated sidewalls to improve driving safety in extra-long tunnels.]]></description>
      <pubDate>Fri, 20 Feb 2026 15:28:22 GMT</pubDate>
      <guid>https://trid.trb.org/View/2613110</guid>
    </item>
    <item>
      <title>Salient Object Detection of Dynamic Night Scenes via Bio-Inspired Spotlight Attention and Hierarchical Edge-Texture Fusion</title>
      <link>https://trid.trb.org/View/2658719</link>
      <description><![CDATA[The perception of night scenes is of crucial importance for driving safety. In dimly lit night environments, as the visibility of objects decreases, both experienced and inexperienced drivers often struggle to fully notice the objects closely related to the driving task. Moreover, because the contours of many objects are blurred at night, locating and detecting objects is much more difficult than in daytime scenes, especially for small traffic objects, which greatly increases potential road hazards. To date, few studies have specifically focused on night object detection based on the driver’s attention. This research is dedicated to solving the detection problem of salient objects in night scenes, particularly small salient objects. First, we constructed a Night Eye-Tracking Object Detection Dataset (NETOD), which provides a benchmark for research on attention-driven object detection in night scenes. Then, we proposed a salient object detection model for night traffic scenes, named NS-YOLO. NS-YOLO integrates a Bio-Inspired Spotlight Attention Module (BSAM) that combines bottom-up feature enhancement with top-down semantic guidance to accurately localize salient objects. Additionally, a hierarchical multi-scale detection architecture is introduced, leveraging a cross-layer feature pyramid and dynamic upsampling to enhance the detection of small objects. Experimental results on the NETOD dataset show that the proposed salient small object detection model for night traffic scenes achieved a mean Average Precision (mAP) of 93.0%, outperforming other advanced models. It has important potential applications in driver assistance, danger warning, and other areas, and is expected to significantly improve the safety and intelligence of night driving. Beyond technical advancements, this work highlights the necessity of human-centric attention mechanisms in autonomous systems, paving the way for safer and more interpretable AI-driven vehicles.]]></description>
      <pubDate>Thu, 19 Feb 2026 10:53:39 GMT</pubDate>
      <guid>https://trid.trb.org/View/2658719</guid>
    </item>
    <item>
      <title>Takeover request (TOR) effects during different automated vehicle failures</title>
      <link>https://trid.trb.org/View/2582463</link>
      <description><![CDATA[Research on driving automation has investigated the use of takeover requests (TORs) to warn drivers about automation failures that require their intervention. Such failures can occur with information in the environment that drivers can use to anticipate them (e.g., system-limit failures) or without such information (e.g., system-malfunction failures). There is a lack of research comparing the effectiveness of TORs prior to these different failure types. We conducted a simulator study with 19 participants to investigate whether the effect of a TOR on drivers’ monitoring and takeover performance differed by failure type. Drivers were trained on automation limits so that they could identify upcoming system-limit failures. We evaluated gaze behaviors starting from 6 seconds before the failure (corresponding to TOR onset in TOR drives and the equivalent point in no-TOR drives) until drivers took over. The effect of TORs on monitoring the roadway was significant only for system-malfunction failures, with participants looking more at the roadway in TOR drives compared to no-TOR drives. For system-limit failures, the TOR did not provide any benefit in terms of visual attention to the roadway, likely because participants were already looking at the roadway because they could anticipate the failure. However, having a TOR for system-limit failures was associated with faster takeover time than not having a TOR. Although the TOR may not have had monitoring benefits when environmental information was available, our findings suggest it was still useful as a confirmation of impending failures and a prompt to take over control of the vehicle.]]></description>
      <pubDate>Thu, 19 Feb 2026 10:53:38 GMT</pubDate>
      <guid>https://trid.trb.org/View/2582463</guid>
    </item>
    <item>
      <title>Driver Focus Detection Based on Eye Tracking Images Using Convolutional Neural Networks: A Comparative Study of Transfer Learning and Custom Architectures</title>
      <link>https://trid.trb.org/View/2642301</link>
      <description><![CDATA[Driver distraction is a major contributor to traffic accidents worldwide, prompting the need for real-time focus detection systems. This study proposes a deep learning-based approach to classify driver attention using eye movement data captured through eye tracking. A new dataset was collected from 15 respondents during simulated driving sessions, from which 32,362 images were extracted and labeled as either focused or unfocused. Three convolutional neural network (CNN) models were developed and evaluated: two based on transfer learning (Inception V3 and MobileNet V2), and one full learning model using architecture optimization via the Taguchi method. The Inception V3 model achieved the highest classification performance, with an average accuracy of 76.78% and an F1 score of 0.76. The custom full learning model achieved 71.88% accuracy with the shortest inference time, while MobileNet V2 yielded the lowest performance. These findings demonstrate that eye-tracking images can serve as effective input for visual attention modeling in driver monitoring systems. The results also highlight a trade-off between accuracy and efficiency, offering valuable insights for real-time applications in intelligent transportation.]]></description>
      <pubDate>Wed, 18 Feb 2026 08:51:10 GMT</pubDate>
      <guid>https://trid.trb.org/View/2642301</guid>
    </item>
    <item>
      <title>Final report on workload measures</title>
      <link>https://trid.trb.org/View/2666555</link>
      <description><![CDATA[Identifying potentially safety-impacting situations of too high or too low workload (overload or underload) is critical for various operators; given the tasks of an air traffic controller (ATCO), this applies in particular to air navigation service providers (ANSPs) following the implementation of changes that alter task definitions. To identify overload or underload, however, we need to be able to measure an operator's workload (WL). Unfortunately, WL is inherently subjective: it reflects the experienced cognitive demand during a task. Assessing an operator's, and particularly an ATCO's, WL has been a longstanding research topic, and researchers have typically resorted to controller self-assessment using numeric scales. These methods suffer from various drawbacks: the query is intrusive, social bias may affect the self-assessment, and small WL variations cannot be recorded. In this project, we aim to make progress towards the development of objective, non-intrusive WL measures. We conducted an ambitious study, including n = 18 ATCOs and 54 en-route scenarios, with the aim of reducing dependence on numeric scales for WL assessment, by also recording various promising WL-indicator candidates (e.g., eye-gaze measures) and then analyzing the validity of these objective indicators. We demonstrate the significant potential of machine learning techniques for predicting ATCO WL. Our findings highlight the efficacy of using eye-tracking data, either in conjunction with head-movement data or independently, for WL prediction. We obtained accuracy rates as high as 96% (F1-score = 84%) in correctly predicting instances of a high EEG-based WL level and 86% accuracy (F1-score = 77%) in predicting three different levels of WL. Reducing the number of features generally results in a slight decrease in model performance but significantly reduces measurement and computational effort. In the most favorable scenario, only 6 features were needed instead of 58, with an accuracy of 82% compared to 85% and an F1-score of 74% compared to 73%.]]></description>
      <pubDate>Thu, 05 Feb 2026 08:33:50 GMT</pubDate>
      <guid>https://trid.trb.org/View/2666555</guid>
    </item>
    <item>
      <title>Computational models for safe interactions between automated vehicles and cyclists</title>
      <link>https://trid.trb.org/View/2666519</link>
      <description><![CDATA[Cyclists, as vulnerable road users, face significant safety risks in traffic, especially at unsignalized intersections where they must interact with motorized vehicles. This PhD thesis investigated bicycle-vehicle interactions at unsignalized intersections and developed predictive models to improve active safety systems and automated driving. The research integrates naturalistic and simulator data to model the behavior of both cyclists and vehicles at intersections. The models included kinematic factors, non-verbal communication, and glance behavior. The studies included in this thesis revealed that kinematic factors, such as time to arrival (DTA), along with cyclists' non-verbal cues, like head movements and pedaling, significantly affect yielding behavior at intersections. Both simulator data and naturalistic data confirmed that visibility conditions and DTA played a critical role in cyclists' decision-making, while subjective data from questionnaires highlighted the importance of communication and eye contact between cyclists and drivers in reducing the severity of interactions. Additionally, an analysis of naturalistic data uncovered differences in yielding behavior between professional and non-professional drivers, with professional drivers being less likely to yield to cyclists. Different models, leveraging machine learning and game theory, were developed to predict yielding decisions during these interactions. Lastly, simulator data were used to model drivers' behavior, incorporating kinematics, demographics, and gaze metrics to predict drivers' responses to crossing cyclists. The predictive models developed through this research provide novel insights for the design of threat assessment algorithms for active safety and automated driving, enhancing the machine's ability to anticipate cyclist behavior and improve safety.]]></description>
      <pubDate>Thu, 05 Feb 2026 08:33:09 GMT</pubDate>
      <guid>https://trid.trb.org/View/2666519</guid>
    </item>
    <item>
      <title>Evaluation of eye-tracking as support in simulator training for maritime pilots</title>
      <link>https://trid.trb.org/View/2666514</link>
      <description><![CDATA[The Swedish Maritime Administration provides maritime pilotage when vessels operate in Swedish pilotage-obliged waters. Through the maritime pilot's knowledge of the waterways and experience in maneuvering different types of vessels, the pilot contributes to ensuring that maritime and environmental safety, as well as accessibility, can be maintained. In addition to skills in ship maneuvering, navigation, and seamanship, the ability to interact with various types of technology, cultures, and crews is also required. Each ship is unique in terms of propulsion, steering, navigation, and communication equipment, as well as maneuvering and information instruments. With increased levels of automation, the demands on maritime pilots to interpret, understand, and handle technology are increasing. Today, maritime pilot training is based on a long tradition of apprenticeship, where the pilot's competence can be seen as implicit (tacit) knowledge developed through years of experience at sea. But since the maritime pilot profession is a practice in change, higher demands are placed on pilot training. One step is to capture experienced maritime pilots' valuable tacit knowledge and transfer it to the next generation. Another step is to include new technology in teaching activities, such as using eye-tracking in simulator training. The purpose of this multidisciplinary research project was to investigate what it means to be a professionally competent maritime pilot, how current training practices are organized for pilot students to develop professional competence, and how the training can be further developed to achieve improved quality.]]></description>
      <pubDate>Thu, 05 Feb 2026 08:33:03 GMT</pubDate>
      <guid>https://trid.trb.org/View/2666514</guid>
    </item>
    <item>
      <title>Evaluating Pedestrian Behavior in Virtual Reality for Traffic Education Using Eye Tracking</title>
      <link>https://trid.trb.org/View/2580251</link>
      <description><![CDATA[Pedestrian safety is critical in urban environments, where the rise of electric vehicles and pervasive distractions contribute to complex traffic conditions. This study examines pedestrian behavior through a Virtual Reality (VR) simulation game, designed to replicate real-world urban scenarios. The simulation includes diverse pedestrian crossing situations, incorporating conventional vehicles, electric vehicles (EVs), and scenarios with technological interventions like LED-lit crosswalks. By integrating eye-tracking technology, the VR platform provides insights into how pedestrians react to visual and auditory cues, especially under conditions of auditory deprivation from silent EVs and environmental distractions. The study recruited 50 participants to explore their decision-making processes across these scenarios. Results indicate that while visual cues are dominant, auditory distractions and silent EVs significantly impair safety decisions. Furthermore, technological interventions like LED crosswalks improved pedestrian focus and safety. These findings underscore the need for urban infrastructure improvements and training strategies to address evolving challenges in pedestrian safety.]]></description>
      <pubDate>Thu, 29 Jan 2026 17:02:25 GMT</pubDate>
      <guid>https://trid.trb.org/View/2580251</guid>
    </item>
    <item>
      <title>Bifocal Effect of Gaze on Crossing Behavior</title>
      <link>https://trid.trb.org/View/2580250</link>
      <description><![CDATA[A discrepancy exists regarding whether or not the other’s gaze is utilized to determine crossing in an intersection. In this study, the authors developed a set of movies that were designed to control for the gaze factor and examined its effect on estimation of the other’s intention. The findings from two experiments demonstrated that the other’s gaze affects the partners’ understandability positively and negatively depending on his/her subsequent behavior, indicating gaze has a bifocal effect on the partners’ understandability.]]></description>
      <pubDate>Thu, 29 Jan 2026 17:02:25 GMT</pubDate>
      <guid>https://trid.trb.org/View/2580250</guid>
    </item>
    <item>
      <title>Effectiveness of Information Presentation by Gaze Timing Consistent with Roles in Collision Avoidance</title>
      <link>https://trid.trb.org/View/2580244</link>
      <description><![CDATA[Recently, due to the miniaturization and automation of mobility, interaction with automated small-sized mobility has been increasing. The purpose of this study is to verify the effect of information presentation through gaze by automated small-sized mobility on interaction with others. Previous studies have shown that the way gaze is used differs depending on whether one's role is to pass through the collision point first or second in collision avoidance between pedestrians. In this study, the authors examined the impact of information presentation that reproduces this gaze usage on the extent to which participants understand the intentions of unknown agents. The results showed that presenting information at gaze timings that matched the agent’s role facilitated the participants’ understanding of the agent’s intentions. Specifically, when passing the collision point first, it was effective to present a short gaze at the beginning of the collision avoidance situation, whereas when passing the collision point second, it was most effective to present a long gaze held until the collision point was reached. This study suggests the need to consider ecologically valid timing when implementing gaze in mobility.]]></description>
      <pubDate>Thu, 29 Jan 2026 17:02:25 GMT</pubDate>
      <guid>https://trid.trb.org/View/2580244</guid>
    </item>
    <item>
      <title>Relationships Between Pilot Gaze Patterns and Control Lapses in Challenging Instrument Approach Flight</title>
      <link>https://trid.trb.org/View/2601625</link>
      <description><![CDATA[This study investigates the link between pilot gaze patterns and control deviations during simulated instrument flight approaches, aiming to understand how attentional focus impacts flight safety. Building on previous research that underscores the importance of situational awareness in aviation, this work delves into the specific aspects of gaze behavior as indicators of cognitive workload and attentional distribution. Unlike prior studies that broadly assessed attentional markers, this research utilizes advanced iris tracking technology to pinpoint how gaze metrics correlate with flight performance. Employing a quantitative research design, the study analyzed over 45 simulated instrument flight approaches conducted by 15 pilots. Detailed metrics of gaze behavior, including fixation duration, saccade rate, scanning patterns, and gaze variability, were captured using iris tracking technology. A novel metric, exceedance autocorrelation, was introduced to assess aircraft control deviations and pilot performance. Statistical analyses uncovered significant correlations: fixation duration exhibited a strong positive correlation with control quality (r = 0.684), and saccade rate correlated even more strongly (r = 0.827). Conversely, scan pattern predictability and gaze variability correlated negatively (r = −0.55) and positively (r = 0.546), respectively, highlighting their relevance under high cognitive workload conditions. The results confirm that specific gaze behaviors are critical indicators of pilot performance in challenging flight scenarios. These insights advance the understanding of pilot attentional mechanisms, offering valuable implications for enhancing aviation safety through improved training programs and cockpit technology design.]]></description>
      <pubDate>Thu, 29 Jan 2026 17:02:25 GMT</pubDate>
      <guid>https://trid.trb.org/View/2601625</guid>
    </item>
    <item>
      <title>Estimating Situational Awareness and Predicting Gaze Allocation Strategies for Pedestrians Using a Markov Model</title>
      <link>https://trid.trb.org/View/2659374</link>
      <description><![CDATA[This study aims to measure and compare the situational awareness of pedestrians while crossing the road at unsignalized and signalized intersections in real-world conditions. The measurement of situational awareness employs Markov gaze entropy, and the obtained transition probability matrix is used to comprehend gaze transition behavior. This study conducted field experiments at an unsignalized and a signalized intersection with volunteer participants. Situational awareness of pedestrians was analyzed considering six areas of interest: Vehicles, Fellow Pedestrians, Near Path, Focus of Expansion, Road Infrastructure (RI), and Non-Traffic Relevant Objects (NTRO). Results of this study indicate that pedestrians at the signalized intersection exhibited lower situational awareness (i.e., higher entropy) than those at the unsignalized intersection. Additionally, higher crossing initiation times and pedestrian speed were associated with lower situational awareness at the unsignalized intersection. At the signalized intersection, pedestrians initiating crossing within 3 to 6 s exhibited the lowest Markov entropy, indicating high situational awareness. Furthermore, an increase in pedestrian speed was associated with increased situational awareness. Moreover, pedestrians at the unsignalized and signalized intersections exhibited more gaze transitions from NTRO to vehicles and from vehicles to NTRO. However, the highest average gaze transition probability (i.e., 49.6%) at the signalized intersection was observed between NTRO and vehicles, and the next highest probability (43.1%) was observed between RI and vehicles. Overall, these study findings can help gain insights into how pedestrians visually explore their surroundings and make decisions while crossing the road. This information can be valuable for designing safer and more efficient intersections, thus improving pedestrian safety.]]></description>
      <pubDate>Tue, 27 Jan 2026 17:08:28 GMT</pubDate>
      <guid>https://trid.trb.org/View/2659374</guid>
    </item>
  </channel>
</rss>