<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Transport Research International Documentation (TRID)</title>
    <link>https://trid.trb.org/</link>
    <atom:link href="https://trid.trb.org/Record/RSS?s=PHNlYXJjaD48cGFyYW1zPjxwYXJhbSBuYW1lPSJkYXRlaW4iIHZhbHVlPSJhbGwiIC8+PHBhcmFtIG5hbWU9InN1YmplY3Rsb2dpYyIgdmFsdWU9Im9yIiAvPjxwYXJhbSBuYW1lPSJ0ZXJtc2xvZ2ljIiB2YWx1ZT0ib3IiIC8+PHBhcmFtIG5hbWU9ImxvY2F0aW9uIiB2YWx1ZT0iMCIgLz48L3BhcmFtcz48ZmlsdGVycz48ZmlsdGVyIGZpZWxkPSJpbmRleHRlcm1zIiB2YWx1ZT0iJnF1b3Q7U3RydWN0dXJlZCBRdWVyeSBMYW5ndWFnZSAgKCBTUUwgKSAmcXVvdDsiIG9yaWdpbmFsX3ZhbHVlPSImcXVvdDtTdHJ1Y3R1cmVkIFF1ZXJ5IExhbmd1YWdlIChTUUwpJnF1b3Q7IiAvPjwvZmlsdGVycz48cmFuZ2VzIC8+PHNvcnRzPjxzb3J0IGZpZWxkPSJwdWJsaXNoZWQiIG9yZGVyPSJkZXNjIiAvPjwvc29ydHM+PHBlcnNpc3RzPjxwZXJzaXN0IG5hbWU9InJhbmdldHlwZSIgdmFsdWU9InB1Ymxpc2hlZGRhdGUiIC8+PC9wZXJzaXN0cz48L3NlYXJjaD4=" rel="self" type="application/rss+xml" />
    <description></description>
    <language>en-us</language>
    <copyright>Copyright © 2026. National Academy of Sciences. All rights reserved.</copyright>
    <docs>http://blogs.law.harvard.edu/tech/rss</docs>
    <managingEditor>tris-trb@nas.edu (Bill McLeod)</managingEditor>
    <webMaster>tris-trb@nas.edu (Bill McLeod)</webMaster>
    <image>
      <title>Transport Research International Documentation (TRID)</title>
      <url>https://trid.trb.org/Images/PageHeader-wTitle.jpg</url>
      <link>https://trid.trb.org/</link>
    </image>
    <item>
      <title>Research on the Application of Data Mining in Logistics Enterprise</title>
      <link>https://trid.trb.org/View/2282446</link>
      <description><![CDATA[With the rapid development of database technology and the wide application of database management systems, data mining is increasingly used in logistics enterprises. How to apply data mining at low cost and with high efficiency is a key problem. After illustrating the techniques, algorithms, and tools of data mining, the applications of data mining in logistics enterprises are classified into four categories, and the most efficient mining algorithm is chosen according to the needs and characteristics of each application. Finally, the application process is materialized based on SQL Server 2005 and a model of the process is given. Data mining clearly has great significance and value for logistics enterprises.]]></description>
      <pubDate>Wed, 27 Mar 2024 11:52:26 GMT</pubDate>
      <guid>https://trid.trb.org/View/2282446</guid>
    </item>
    <item>
      <title>Anomaly Detection and Quality Indicators for Digital Maps Used in ADAS Applications</title>
      <link>https://trid.trb.org/View/2334698</link>
      <description><![CDATA[With the evolution of Advanced Driver Assistance Systems (ADAS), the gap towards Autonomous Driving (AD) is continuously narrowing. This progress is made possible using digital maps as one of the critical sources along with other ADAS sensors. Correct map data is crucial for the proper functioning of ADAS functions. This demands that the correctness of the map data be evaluated regularly and efficiently. This work proposes a framework to quantify map data correctness systematically. The framework algorithmically detects error locations in a map database and then derives KPIs from these error locations. The framework helps to identify issues in the map data related to internal data consistency or heuristic rules. The framework consists of process automation in Python and map database checks in SQL. The proposed framework defines a validation methodology that achieves two goals: (1) KPIs for map data reliability and (2) systematic error identification. The framework was evaluated with maps from various sources. The framework yields results quickly and efficiently so that it can be regularly executed well before vehicle testing. In addition, the efficient KPI calculation permits the control of relevant map properties over subsequent map releases.]]></description>
      <pubDate>Tue, 05 Mar 2024 17:12:18 GMT</pubDate>
      <guid>https://trid.trb.org/View/2334698</guid>
    </item>
    <item>
      <title>Securing transportation web applications: An AI-driven approach to detect and mitigate SQL injection attacks</title>
      <link>https://trid.trb.org/View/2320562</link>
      <description><![CDATA[Cybersecurity is a critical concern in the transportation sector, where web applications play a pivotal role in managing essential services and sensitive data. Among the various cyber threats, SQL injection attacks pose a significant risk, potentially leading to unauthorized access, data breaches, and disruption of transportation systems. To address this challenge, an advanced approach is proposed that combines Artificial Intelligence (AI) techniques and Natural Language Processing (NLP) to detect and mitigate SQL injection attacks in transportation web applications. In the data collection phase, a comprehensive dataset of real-world attack instances is selected from publicly available sources specializing in cybersecurity datasets. The dataset includes a diverse range of attack vectors and addresses the issue of class imbalance by incorporating both successful and unsuccessful attack attempts. The preprocessing step involves employing NLP techniques to transform the textual input data into a suitable format for AI-based detection. Tokenization, stop-word removal, and stemming are applied to ensure the model effectively analyzes and recognizes attack patterns. For detection, a logistic regression model is utilized to estimate the probability of a successful SQL injection attack based on the relevant features. Oversampling and undersampling techniques are employed to handle class imbalance and improve the model’s performance. Additionally, feature selection techniques are implemented to reduce noise and enhance pattern recognition. The evaluation of our proposed approach demonstrates a remarkable detection accuracy of 99.97%, indicating the model's high capability to identify SQL injection attacks. The precision and recall values further validate the model’s effectiveness in correctly detecting successful attacks and minimizing false positives.
The success of our approach lies in its ability to integrate AI and NLP techniques effectively, offering a more robust and reliable solution for detecting and mitigating SQL injection attacks in transportation web applications. By addressing the limitations and exploring future research directions, our approach holds promise in bolstering cybersecurity measures and safeguarding critical transportation infrastructure from evolving cyber threats.]]></description>
      <pubDate>Thu, 22 Feb 2024 09:06:19 GMT</pubDate>
      <guid>https://trid.trb.org/View/2320562</guid>
    </item>
    <item>
      <title>Division of Planning Research On-Call (ROC) Task#12 - Survey of Risk Management Policies for Transportation Agencies</title>
      <link>https://trid.trb.org/View/2310548</link>
      <description><![CDATA[The project and risk management policies and strategies used by departments of transportation (DOTs) across the country were evaluated. The purpose is to gain insight into how they measure and manage project development and delivery, and ultimately obtain ideas to improve the Ohio Department of Transportation (ODOT)'s own project management methods. This project entailed a scan of state DOT online documents, a survey of state DOTs, and follow-up interviews with selected states. The main findings of this project are: (1) Power BI and SQL are commonly used tools amongst DOTs; however, most of the sampled and interviewed DOTs rely on spreadsheets for communications. (2) Several DOTs reported the use of Monte Carlo simulation for cost estimation; more importantly for this project, they use this method for identification and ranking of risks in a project. (3) A common recommendation from interviews around project risk management is to keep it simple and easy to communicate.]]></description>
      <pubDate>Thu, 21 Dec 2023 09:21:57 GMT</pubDate>
      <guid>https://trid.trb.org/View/2310548</guid>
    </item>
    <item>
      <title>An Ontology-Based Approach for Pavement Crack Treatment Knowledge Base Development</title>
      <link>https://trid.trb.org/View/1743814</link>
      <description><![CDATA[Crack treatment is a widely performed maintenance operation for in-service asphalt concrete pavements. A knowledge-based system containing crack treatment knowledge can assist pavement engineers in crack treatment selection. However, the knowledge base development process is usually time-consuming and involves massive repetitive labour. In this paper, the authors propose an ontology-based approach for efficient and effective knowledge base development. Firstly, the authors choose Federal Highway Administration (FHWA) reports as the knowledge source. Secondly, the crack treatment knowledge base is developed by constructing a crack ontology model and a crack treatment rules repository. The crack ontology model is constructed using OWL, and the crack treatment rules repository is constructed using SWRL. Finally, the crack treatment knowledge base is implemented and validated in the Protégé software. The recommendation results for crack examples are obtained using SQWRL, a semantic query method. The presented ontology-based approach achieves the formal expression of pavement domain concepts and facilitates the development process of the crack treatment knowledge base. In addition, the ontology-based crack treatment knowledge base can achieve intelligent treatment recommendation and enable the sharing and reuse of expertise across different systems.]]></description>
      <pubDate>Wed, 03 Feb 2021 15:00:47 GMT</pubDate>
      <guid>https://trid.trb.org/View/1743814</guid>
    </item>
    <item>
      <title>WVDOH Web-Accessible Crash Database Deployment</title>
      <link>https://trid.trb.org/View/1300215</link>
      <description><![CDATA[Electronic crash reports that are completed by state, county, and local police officers in West Virginia are transmitted electronically to a central system provided by VS Visual Statement Incorporated.  The crash reports can be accessed using their ReportBeam collision reporting system.  Due to the limitations of the ReportBeam system for analyzing the crash data for safety applications, the West Virginia Division of Highways downloads the crash records from ReportBeam and utilizes Microsoft Access to generate various reports and conduct analysis.  As the amount of crash records in the database continues to increase, the ability of Microsoft Access to manage this data decreases.  In order to maximize the value and accessibility of the available crash data, the West Virginia Department of Highways (WVDOH) has initiated this project to deploy an online relational Structured Query Language (SQL) database.]]></description>
      <pubDate>Wed, 26 Feb 2014 01:00:36 GMT</pubDate>
      <guid>https://trid.trb.org/View/1300215</guid>
    </item>
    <item>
      <title>A Decision Support System with the Plan of Loading and Reinforcing Dimension Freight</title>
      <link>https://trid.trb.org/View/1113903</link>
      <description><![CDATA[The plan of loading and reinforcing dimension freight is a key point of rail transportation. Its features include technical difficulty, high security standards, and so on. For a long time the process has been inaccurate and has wasted workforce hours. In this paper, the Loading and Reinforcing Dimension Freight Decision Support System, with enhanced human-computer interaction, is developed using the computer language VC++ and database Structured Query Language (SQL). This system can optimize the choice of vehicle, and a feasible loading and reinforcing plan can be established quickly by the computer decision support system, which will reduce the amount of calculation work and minimize the errors introduced by manual calculation.]]></description>
      <pubDate>Tue, 25 Sep 2012 09:24:28 GMT</pubDate>
      <guid>https://trid.trb.org/View/1113903</guid>
    </item>
    <item>
      <title>The Application of Database Techniques in the Integrated Vessel Information Service System</title>
      <link>https://trid.trb.org/View/1113943</link>
      <description><![CDATA[The Integrated Vessel Information Service System (IVISS) was a database system which collected and processed AIS (Automatic Identification System) information and provided information services to clients. The AIS information database was the core of the whole system. A large volume of data, the requirement of hard real-time capability, parallel access, and wide physical distribution were features of the system, which created difficulties for efficient operation of the database. In this paper, several database techniques were studied, mainly including data partitions, indexes, transactions and locks, procedures and functions, and replication, and were applied to IVISS based on the features of the AIS information database, using SQL Server 2005 as the tool. Comprehensive application of these techniques improved the overall performance of the database, markedly increased the connection count and access rate, and can also provide good guidance for the development of other large databases.]]></description>
      <pubDate>Tue, 18 Sep 2012 08:54:21 GMT</pubDate>
      <guid>https://trid.trb.org/View/1113943</guid>
    </item>
    <item>
      <title>Retrieving Transportation Information with Constraint SQL and Constraint Datalog</title>
      <link>https://trid.trb.org/View/1102741</link>
      <description><![CDATA[Users of transportation information have to recreate how they retrieve their information when managers of the information make changes.  Managers must choose between burdening the users with the change and burdening themselves with the current state.  The costs associated with these choices are the cost of having the users recreate how they retrieve their information and the cost of not making the change.  If, however, users did not have to recreate how they retrieve their information when the managers of the information make changes, the problem could be avoided.  This dissertation presents such a solution for avoiding the problem.  The approach developed is applied to the data languages Constraint SQL and Constraint Datalog.  Ultimately, managers should be able to implement changes without affecting users' queries.]]></description>
      <pubDate>Fri, 20 May 2011 16:29:56 GMT</pubDate>
      <guid>https://trid.trb.org/View/1102741</guid>
    </item>
    <item>
      <title>NCDOT Quality Control Methods for Weigh-in-Motion Data</title>
      <link>https://trid.trb.org/View/1090698</link>
      <description><![CDATA[The North Carolina Department of Transportation (NCDOT) collects weigh-in-motion (WIM) data using procedures and systems consistent with recommended industry practices.  The NCDOT WIM systems are designed to estimate static vehicle axle weights based on dynamic traffic measurements. Regardless of the technology used, data errors and poor quality data are captured, which makes a quality control (QC) process an important part of all WIM data systems. This article describes the NCDOT WIM QC procedures. WIM data must undergo a series of sequential and well-defined QC procedures to ensure that the data meet the federal requirements and new standards for the Mechanistic Empirical Pavement Design Guide (MEPDG) process. After a literature review and consideration of prototype procedures, the authors concluded that the most efficient method of performing the WIM QC at NCDOT included structured query language (SQL) queries in a front-end database system applied to raw data stored in live back-end databases.  The QC technique uses a combination of rule-based checks and manual audits of plots and reports.  The NCDOT WIM QC process was applied to 45 WIM stations, which were checked for class and weight data anomalies.  Findings show that the proposed process provided reliable data sets for use in developing the MEPDG traffic inputs for the NCDOT.  Recommendations for future research are given.]]></description>
      <pubDate>Mon, 28 Feb 2011 07:36:18 GMT</pubDate>
      <guid>https://trid.trb.org/View/1090698</guid>
    </item>
    <item>
      <title>LONG-TERM PAVEMENT PERFORMANCE INFORMATION MANAGEMENT SYSTEM PAVEMENT PERFORMANCE DATABASE USER REFERENCE GUIDE</title>
      <link>https://trid.trb.org/View/697916</link>
      <description><![CDATA[This document provides information to aid in understanding and using the Long-Term Pavement Performance (LTPP) program pavement performance database.  This document provides an introduction to the structure of the LTPP program, the relational structure of the LTPP database, a description of the location of various data elements, contents of the data table, tips on efficient means of manipulating data for specific types of investigations, how to obtain data, and example Structured Query Language (SQL) scripts that can be used to build user-defined custom extractions.]]></description>
      <pubDate>Fri, 07 May 2004 00:00:00 GMT</pubDate>
      <guid>https://trid.trb.org/View/697916</guid>
    </item>
    <item>
      <title>OHIO'S BASE TRANSPORTATION REFERENCING SYSTEM (BTRS), BRINGING ENTERPRISE GIS TO THE OHIO DEPARTMENT OF TRANSPORTATION</title>
      <link>https://trid.trb.org/View/646218</link>
      <description><![CDATA[In 1999, the Ohio Department of Transportation (ODOT) began an effort to seamlessly integrate corporate enterprise management systems with ODOT's mature geographic information system (GIS). This effort was entitled the Base Transportation Referencing System (BTRS). A few of the integrated systems are: the Project Development Management System (PDMS), Construction Management System (CMS), and Pavement Management System (PMS); in all, 11 large enterprise management systems were geo-referenced. This project not only updated the systems, but also kept them updated as the underlying road networks were modified. This presentation will cover the methodologies used to bring corporate data not only to standard SQL reporting tools, but to GIS as well. New systems and products based upon the BTRS standard will be demonstrated.]]></description>
      <pubDate>Fri, 13 Jun 2003 00:00:00 GMT</pubDate>
      <guid>https://trid.trb.org/View/646218</guid>
    </item>
    <item>
      <title>HIGHWAY SAFETY INFORMATION SYSTEM GUIDEBOOK FOR THE CALIFORNIA DATA FILES. VOLUME I: SAS FILE FORMATS. 4TH EDITION</title>
      <link>https://trid.trb.org/View/707900</link>
      <description><![CDATA[The California database incorporated in the Highway Safety Information System (HSIS) is derived from the California TASAS (Traffic Accident Surveillance and Analysis System).  The system, maintained by the Traffic Operations Office of Caltrans, is a mainframe-based system based on COBOL programming.  The Traffic Operations Office provides the data to HSIS in the form of two different data files.  These contain (1) accident data and (2) roadway inventory data.  Beginning in 1994, the HSIS was converted to a relational database for internal use.  This database, using a SYBASE system, stores the data received from California and other States, and the data files for a given State are linked and manipulated using SQL language.  However, this conversion from the original SAS-based system to the newer relational system is somewhat transparent to the end-user of the data since the output files produced by SYBASE for modeling and analysis will be SAS formatted.  SAS format libraries are produced for each of the variables in each of the files.  This Guidebook concerns these SAS files - their formats, completeness, and quality.  This report, Volume I: SAS File Formats, contains the following: Introduction; Details of Major Files; California Contacts; Composite List of Variables; Accident File; Roadlog File; Intersection File; and Interchange Ramp File.  The Single Variable Tabulations are found in Volume II of this report.]]></description>
      <pubDate>Tue, 07 May 2002 00:00:00 GMT</pubDate>
      <guid>https://trid.trb.org/View/707900</guid>
    </item>
    <item>
      <title>HIGHWAY SAFETY INFORMATION SYSTEM GUIDEBOOK FOR THE CALIFORNIA DATA FILES. VOLUME II: SINGLE VARIABLE TABULATIONS. 4TH EDITION</title>
      <link>https://trid.trb.org/View/707901</link>
      <description><![CDATA[The California database incorporated in the Highway Safety Information System (HSIS) is derived from the California TASAS (Traffic Accident Surveillance and Analysis System).  The system, maintained by the Traffic Operations Office of Caltrans, is a mainframe-based system based on COBOL programming.  The Traffic Operations Office provides the data to HSIS in the form of two different data files.  These contain (1) accident data and (2) roadway inventory data.  Beginning in 1994, the HSIS was converted to a relational database for internal use.  This database, using a SYBASE system, stores the data received from California and other States, and the data files for a given State are linked and manipulated using SQL language.  However, this conversion from the original SAS-based system to the newer relational system is somewhat transparent to the end-user of the data since the output files produced by SYBASE for modeling and analysis will be SAS formatted.  SAS format libraries are produced for each of the variables in each of the files.  This Guidebook concerns these SAS files - their formats, completeness, and quality.  This report, Volume II, contains the single variable tabulations.  Volume I, SAS File Formats, contains an introduction, details of major files, California contacts, a composite list of variables, accident files (accident subfile, vehicle subfile, and occupant subfile), roadlog file, intersection file, and interchange ramp file.]]></description>
      <pubDate>Tue, 07 May 2002 00:00:00 GMT</pubDate>
      <guid>https://trid.trb.org/View/707901</guid>
    </item>
  </channel>
</rss>