Preface

Users of Overhead Line Design Software face many choices of vendors, and of options within that software, to help them achieve code compliance and meet utility standards. Users want to know whether their chosen software will ensure compliance with codes and standards, and suppliers of this software want to assure their customers that it will. Until now there has been no framework through which users and vendors could ask and answer these questions in a complete way.


This document provides a framework that critiques vendor software solutions from many different angles, together with a mechanism that permits several software packages and real-world validation tests to be compared side by side. This makes it transparent to users exactly what a software package will and will not do, allowing them to make informed choices about how, or whether, the software fits within their overall Line Design Process. Missing functionality, complicated workflows, or poor performance on some tests will become more visible to both users and vendors within this framework, allowing vendors to focus development in those areas and permitting users to adjust their processes in the meantime. A responsible vendor should willingly participate in this framework exercise in order to support the end user's due diligence requirements.


Scope

This document focuses on overhead power, joint-use, and communication lines, typically run on single-pole structures of various material types (wood, concrete, steel, composite, etc.). The geographic scope is North America, where the CSA and NESC/GO95 code standards are very similar. A software vendor may choose to address only a subset of these code standards. Where applicable, deterministic and static loads are considered. Reliability-based loading, broken-wire tests, and dynamic responses are not currently within the scope of this work.


Single-pole and multi-pole structural analysis will include both linear and nonlinear methodologies.



Results Repository

To start, a database of the test results documented in this framework should be built for each software solution. Comparison between software packages becomes possible when these databases are shared, and sharing and cross-reporting/comparing should happen if and when users desire it. There is no requirement that the databases be publicly available, although public availability would be in the best interests of end users.


To limit the possibility of errors, users, consultants, or vendors with expertise in a particular software package will be asked to populate that package's results and to document the software options required to reproduce them. These results will be collected in a database that is expected to be updated as new test results come in and as vendor software is updated.


Where available, results from real-world tests will be included in a separate database to help validate the other results collected.
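

To make the repository structure concrete, the sketch below shows one possible layout, assuming a simple relational schema: one table for software test results (recording the software version and the options used to produce each result) and a separate table for real-world validation results keyed to the same test identifiers. The table names, column names, and file name are illustrative assumptions, not something prescribed by this framework.

    import sqlite3

    # Illustrative schema only; this framework does not prescribe table or column names.
    schema = """
    CREATE TABLE IF NOT EXISTS software_results (
        test_id        TEXT NOT NULL,   -- identifier of a test defined in this framework
        software_name  TEXT NOT NULL,   -- vendor software package
        software_ver   TEXT NOT NULL,   -- version the result was produced with
        options_used   TEXT,            -- software options/settings needed to reproduce it
        result_value   REAL,            -- numeric result of the test
        units          TEXT,
        entered_by     TEXT,            -- user, consultant, or vendor who supplied it
        entered_on     TEXT,            -- date of entry
        PRIMARY KEY (test_id, software_name, software_ver)
    );

    CREATE TABLE IF NOT EXISTS real_world_results (
        test_id        TEXT NOT NULL,   -- same test identifiers as software_results
        result_value   REAL,
        units          TEXT,
        source         TEXT,            -- description of the physical test or measurement
        PRIMARY KEY (test_id, source)
    );
    """

    with sqlite3.connect("results_repository.db") as db:
        db.executescript(schema)

        # Side-by-side comparison of software results against real-world validation data.
        rows = db.execute("""
            SELECT s.test_id, s.software_name, s.software_ver,
                   s.result_value AS software_value, r.result_value AS measured_value
            FROM software_results s
            LEFT JOIN real_world_results r ON r.test_id = s.test_id
            ORDER BY s.test_id, s.software_name
        """).fetchall()

        for row in rows:
            print(row)

Because both tables share the framework's test identifiers, merging shared databases or comparing software results against real-world measurements reduces to a join on the test identifier, whatever storage technology is actually used.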


A User Forum is anticipated that will collaborate to help ensure the tests performed are adequate, meaningful, and reasonably comparable between software solutions. Open and fair assessment is the goal, in support of strong engineering due diligence.