
The Working Paper | Winter 2018

January 25, 2018

Chair's Corner | Linda Triplett (Mississippi)

Happy New Year! While the NLPES new year officially begins with the fall Professional Development Seminar, I am excited about the opportunities that we have ahead of us in the new year, however you choose to measure it. Because a new year marks a new beginning, it encourages us to renew our efforts, try harder, achieve more and maybe even approach things in a new way.

First of all, what a great beginning to the NLPES new year! The Wisconsin Legislative Audit Bureau, under the leadership of Joe Chrisman, state auditor, set a new high bar for an NLPES PDS. The outstanding programming put together by LAB staff attracted a large and enthusiastic turnout of 135 legislative program evaluators and performance auditors from 31 states and territories. The gorgeous, beautifully renovated Wisconsin State Capitol, celebrating its 100th anniversary, was the perfect backdrop for our opening session.

Those of us who were fortunate enough to attend the 2017 PDS in Madison heard opening remarks from NCSL Staff Chair Chuck Truesdell followed by presentations from outstanding and motivational speakers, including legislators serving on Wisconsin’s Joint Legislative Audit Committee, professors from the renowned La Follette School of Public Affairs of the University of Wisconsin-Madison and a professional leadership trainer. In addition to the many high-quality external speakers, our colleagues representing legislative program evaluation and performance audit shops from around the country shared their expertise on how to conduct evaluations in a variety of policy areas, as well as how to use technical tools and soft skills to improve the rigor and impact of our work. 

Looking to the future, our Professional Development Committee already has applied for and received NCSL funding for two webinars. One of the webinars will explain the methodology that the Utah Office of the Legislative Auditor General used in its NLPES Excellence in Research Methods Award-winning report, "A Performance Audit of the Department of Financial Institutions' Regulation of the Payday Loan Industry." The second webinar will focus on how to use data visualization techniques to improve the messaging of our reports and presentations.

Finally, while we are all looking forward to next year's PDS, Sept. 9-12 in New Orleans, I also wanted to give you a heads-up about the 2020 Super PDS, which we have agreed to participate in at a yet-to-be-determined location. Most NCSL staff sections have agreed to participate. In talking with colleagues from other staff sections at Legislative Staff Coordinating Committee meetings, it became apparent that there is enough overlap in our job responsibilities and interests to create an interesting synergy at a Super PDS.

In closing, I welcome your suggestions as to how we can better serve you in 2018.

Linda Triplett is the 2017–2018 NLPES Executive Committee chair. She can be reached at [email protected]. 

Ask the Expert 

This column is a new addition to the newsletter as a way for our members to reach out to senior-level professionals in the performance audit/program evaluation community and ask questions. Our expert for this issue is Kirby Arinder, Ph.D., a research methodologist for the Mississippi Joint Legislative PEER Committee. He can be reached at [email protected].

Q. How do you approach a project when it is not possible to thoroughly cover the subject in the time provided? And how do you deal with incomplete, inaccurate or otherwise "bad" data from agencies or entities? 

A. Drawing the right conclusions is easy in textbooks and test questions. When you know all and only the relevant information, you essentially face a Sudoku puzzle. It may be complicated, but you know there’s only one coherent answer and you know that it can be found given what you have. 

But the real world doesn’t offer such assurances. Almost inevitably, you’re dealing with partial data or bad data or some combination thereof. You don’t have time or resources to do a convincingly thorough literature review, or you know that you’ve received only part of a dataset, or there is reason to suspect that the dataset you have is at odds with the facts. You don’t want to just declare ignorance—so what do you do?

I believe there is a good answer, and it rests upon one primary and three subsidiary methodological imperatives. None of them is technical, and you’re probably already doing some or all of them. But I think there’s value in making them explicit. So:

Aim First at Inferential Strength, Not Truth

The first imperative is probably the most counterintuitive. We think of what we do as trying to present the truth, and that is an important goal. But you shouldn’t primarily aim at truth; rather, you should aim at inferential strength, at which point you’ll get truth more often as a side effect. (There’s a case to be made for this claim, but not here; space forbids.)

Inference in general is the derivation of a conclusion from premises. For example:

  • I know that all men are mortal and that Socrates is a man, and from those facts I conclude that Socrates is mortal.
  • I clearly remember seeing my bedside clock at 10:32 p.m. and again at 11:44 p.m. but don't recall any of the time in between, and I conclude that I was abducted by aliens during the missing time.
  • I see that a random sample of 1,000 people included 450 in favor of a proposed referendum and 550 opposed, and I conclude that the true rate of support is likely to be between 42 and 48 percent, and thus that the referendum is destined for defeat.

It should be obvious just from the above examples that not all inferences are created equal. The first is utterly certain—if the premises are true, the conclusion must inevitably be true. The second is pretty bad—we can accept the premises but think of a large number of plausible alternate explanations. 

The third is in that interesting terrain where we all live. It's reasonable but hardly certain, and it'd be better if qualified a bit—the 95 percent confidence interval from the Clopper-Pearson binomial procedure applied to the sample is about 6 percentage points wide—which, in turn, licenses some other inferences (again, too complicated to discuss here).
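To see that qualification computed, here is a minimal sketch, assuming Python with SciPy available; the 450-of-1,000 figures come from the polling example above, and the helper function is ours:

```python
# Clopper-Pearson "exact" confidence interval for a binomial proportion.
from scipy.stats import beta

def clopper_pearson(successes, trials, alpha=0.05):
    """Two-sided exact (Clopper-Pearson) interval for a proportion."""
    lower = beta.ppf(alpha / 2, successes, trials - successes + 1) if successes > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, successes + 1, trials - successes) if successes < trials else 1.0
    return lower, upper

low, high = clopper_pearson(450, 1000)
print(f"95% CI for support: {low:.1%} to {high:.1%}")
# Roughly 41.9% to 48.1%: about 6 percentage points wide, and entirely
# below 50%, which is what licenses the "destined for defeat" inference.
```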

Inferential strength is a measure of the likelihood of a conclusion, given certain premises. The Socrates example hits the maximum possible inferential strength—its conclusion is certain, given its premises. The alien abduction argument has such low inferential strength we’d typically call it unwarranted or irrational. We hope the sampling inference is strong enough to warrant belief, but its final conclusion (that the referendum will be defeated) is obviously open to uncertainty.
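One compact way to read that definition (our gloss, not the author's own formalism) is as a conditional probability, with deduction as the limiting case:

```latex
\mathrm{strength}(P_1,\dots,P_n \Rightarrow C) \;=\; \Pr(C \mid P_1,\dots,P_n),
\qquad \text{deductive validity: } \Pr(C \mid P_1,\dots,P_n) = 1 .
```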

Our primary goal in evaluation should be contextually maximized inferential strength leading to useful conclusions (it would be self-defeating to maximize inferential strength by only ever concluding trivialities—again, a digression too long for this space). We are all faced with imperfect data, and we can't be held responsible for that. What we can and should be held responsible for is presenting conclusions that are as certain as possible, given the information we have.

Reconceptualizing our goal in this way suggests several further practices—all simple, but all liable to increase certainty in the face of information loss, corruption, or unavailability.

Conditional Findings, Qualified Antecedents, Inferential Threats

We should present our findings, not in isolation, but as the result of the evidence that led to them. Presenting a finding in the logical form of a conditional—“If X, then Y,” or “Given X, we conclude Y”—makes it clear that the finding does not emerge from the ether fully formed, but is justified—insofar as it IS justified—by a specific set of data. (In context, a conditional statement is a particular logical form; it is neither inherently more nor less certain than a non-conditional. Again, word count limits.)

By the same token, when we're making inferences, we need to be open about the uncertainty in our antecedents (the "X" part of "If X, then Y"). If you have reason to suspect that your data are imperfect, say so—and then say why you made the methodological decision to do an analysis on the basis of them anyway. If you believe that your survey of the literature is incomplete, say why you think it is still sufficient to support your claims.

And finally, explicitly present threats to your inference—plausible alternate explanations or scenarios in which your premises could all be true and your conclusion fail to obtain. If your inference is specifically statistical, this may be as simple as strictly defining, in a footnote or appendix, the statistical mechanism you employed. 

Performance evaluators cannot be instantly conversant with all the literature on a topic, nor can they guarantee complete, accurate data. But performance evaluators aren’t primarily researchers or data miners; they’re analysts. The value we bring to the table is inferential, not informational—we organize data into meaningful patterns and demonstrate the weight of evidence behind a conclusion. 

Emphasizing inferential strength in analyses, and thus explicitly conditionalizing findings, qualifying antecedents, and dealing with inferential threats, helps us make true statements in the face of limited information. But it’s also pragmatically useful. It decreases the temptation to unjustified leaps of logic by making them explicit, and it makes our reasoning easier to follow. It’s a conceptual shift worth making.

Do you have a question you would like addressed by an expert? Let us know by sending questions to [email protected].

Fun Fieldwork

This column is a relatively new addition to the newsletter designed to highlight fun or unique fieldwork opportunities that performance audit/program evaluation offices sometimes take on to better understand agency programs and operations. If you have a fieldwork experience your office would like to highlight, please email submissions to [email protected].

Going Undercover to Discover Payday Lending Violations in Louisiana | Karen LeBlanc (Louisiana)

Proper oversight of payday lending is important because payday loans are often used by individuals who are financially vulnerable, having no other source of credit and no access to loans from traditional banking institutions. Payday loans are typically for small-dollar amounts and are due in full by the borrower's next payday. In some cases, borrowers cannot afford to pay back the full balance by the due date, so they end up repeatedly refinancing the loan and racking up additional fees. In Louisiana, a payday lender cannot issue a loan of more than $350 and cannot charge the borrower more than $55 in fees for that loan.

Approximately 330 payday lending companies operate 965 locations across Louisiana. These companies self-reported issuing over 3.1 million loans and collecting $145.7 million in fees during calendar year 2013.

In 2013, we issued an audit on the Office of Financial Institutions' (OFI) oversight of payday lenders. During our audit, we performed typical fieldwork steps such as interviewing agency staff, analyzing data, and conducting file reviews, but to truly determine whether certain violations were occurring, we had to go undercover. Two members of the audit team with an affinity for acting posed as "secret shoppers" and attempted to obtain payday loans that violated state law. Louisiana law prohibits a payday lender from dividing a loan into multiple loans, as this allows the lender to benefit by charging more in fees. OFI defines multiple loans as more than one loan totaling more than $350 to the same person on the same day. Our secret shoppers, posing as borrowers, visited 29 different lenders and found that seven (24 percent) directed them to another of the lender's locations to take out an additional loan when they asked to borrow more than $350, in violation of state law.

Going undercover is definitely not for everybody; it is perhaps only for the most adventurous of auditors. And it certainly is not feasible, or even ethical, for every audit. For this specific audit, though, it worked well and enhanced the credibility of our findings. It allowed us to corroborate anecdotal remarks from stakeholders that violations were occurring, and it gave us the strongest form of evidence—physical—validating what we were seeing in agency files. Ultimately, it showed why proper oversight of payday lending is important and helped us formulate valuable recommendations to improve the agency's regulatory processes.

See the complete report.

Karen LeBlanc is the director of Performance Audit Services for the Louisiana Legislative Auditor. She can be reached at [email protected].

New Technologies

This column is a relatively new addition to the newsletter featuring technology trends that can be useful in improving our products and productivity.

Washington JLARC Focuses on Continuous Improvement for Online Reports | Rebecca Connolly (Washington)

Washington’s JLARC first began publishing performance audit reports in a web-based format three years ago. Since then, technology investments and process changes have enhanced Washington JLARC’s web-based reports even further.

The initial move to web-based reports was a paradigm shift for the office. The change meant going beyond simply making a PDF version of a typical linear report fashioned around printed pages. Instead, each report is a website in and of itself—with features to navigate across the content and interactive graphics.

“Our research meets the same standards for quality and completeness as it did before the change,” says our Legislative Auditor Keenan Konopaski. “Our writing and use of graphics is now more focused on the needs of our committee, other legislators and legislative staff. They’re busy people. Our job is to tell them what they need to know quickly, clearly and in an accessible format. Legislators read much of their news and information on the Internet, and we felt our mode of communication needed to keep up with that.”

We developed the code for our first web reports in-house, borrowing heavily from other sites for our navigation. We continued to write reports in Microsoft Word and used a macro to create HTML code. Administrative staff edited that code in Dreamweaver to create the reports. It worked, but it also led to extra handoffs and longer production time. Analysts could not easily see how the report would look as they worked. The code was limited and prevented us from using some web features, and the overall look did not match the professional, trusted image we wanted to convey.

Fixing the problems became a high priority. We assigned an analyst to lead it as a project, and she spent about 10 percent of her time on it throughout 2016.

JLARC staff members worked on fixing the writing and production process. At the same time, we developed a requirements document with input from our staff, focus groups of other legislative staff outside of JLARC, and JLARC’s own Executive Committee. Legislative IT staff helped us identify solutions that met the requirements and could be supported on the Legislature’s servers. And we hired a professional web design firm to re-tool the look and feel of the report format.

After a months-long process and testing, we chose a product called oXygen (for more info about the product, see www.oxygenxml.com). Key factors were that it offered track changes similar to Microsoft Word and the ability to see the resulting webpage in real time. It took about two months of IT time to configure the system to meet our needs and implement the design firm’s suggested format. We did all of the training in-house as analysts began to use the system in 2017.

Now, we are enhancing the report template with features such as sortable tables and slideshows. Staff are learning to incorporate Tableau and ArcGIS. It’s technology, though, so we’ll never be done with the changes. Check out our latest reports. And stay tuned as we continue to evolve.

Rebecca Connolly is a research analyst for Washington JLARC. She can be reached at [email protected].

Report Radar | Chris Latta (Pennsylvania)

Good day, fellow seekers, and welcome to the winter edition of the Report Radar, your veritable cornucopia of performance evaluation reporting from across the country. Today we'll be looking at studies from Georgia, Louisiana, Mississippi and South Carolina focusing on education grants, improper travel, state-owned vehicles and The Citadel.

First past the post is an interesting report from the Georgia Department of Audits and Accounts that studies residential treatment facility (RTF) grant funding provided by the Georgia Department of Education. The report found that school systems have "inconsistently distributed RTFs' Quality Basic Education funds," in some cases failing to forward all allotted funds. Our brothers and sisters from the Peach State also found that while local school boards are responsible for making sure that kids get an education while in a treatment facility, it is an open question whether requiring schools to administer the grants helps achieve that goal. According to the report, grant applications generally are not reviewed, and when they are, they are evaluated only for grant compliance.

To make matters worse, auditors found that RTFs failed to submit documentation that would allow grant payments to be processed in a timely manner. As a result, some RTFs received their funding for 2017 in November 2016 and others not until June 2017. Read the entire report.

Not to be outdone, the Louisiana Legislative Auditor's Office released a report on improprieties by the state police, finding improper travel, improper use of hotel rooms, and improper use of state police personnel and assets.

Four Louisiana state troopers thought it would be a good idea to take a circuitous route from the state to San Diego for a conference. And by circuitous, the Auditor’s Office means stopping off at the Grand Canyon, Hoover Dam and Las Vegas, which, you’ll be shocked to learn, are not exactly along the most direct route to San Diego. Three of the four troopers reported, and were paid for, hours they may not have worked. All four incurred additional hotel charges while staying at the tourist destinations.

Apparently the rot starts at the top with the Louisiana State Police superintendent, Colonel Michael Edmonson. During Mardi Gras, Edmonson obtained extra hotel rooms paid for by the city of New Orleans in his own name and in the names of other troopers. This alone might not be a bad thing, except for the slightly inconvenient fact that he allowed his friends and family to use the extra rooms free of charge. Additionally, the colonel received hotel reimbursement for his stay in the Big Easy even though the city had purchased a room for him at another hotel.

But wait, there’s more… Edmonson, according to the report's findings, directed troopers to run personal errands for him and his family and friends during work hours. Examples include transportation to a Bob Seger concert, the Golden Nugget Casino and New Orleans Mardi Gras events. The colonel's generous use of taxpayer dollars also included dispatching troopers to deliver trays of food from the State Police cafeteria to the private residence of a friend when the friend's father passed away. Read the entire report.

The Joint Legislative Committee on Performance Evaluation and Expenditure Review (PEER) of Mississippi produced a study on the "Management of Mississippi's State-Owned Vehicles." The report found that Mississippi's vehicle management system does not currently maintain complete and reliable data, such as the number of state-owned vehicles, vehicle mileage and maintenance costs. The lack of information inhibits the Legislature's ability to make data-driven appropriation decisions. The report also found that state agencies have not input accurate vehicle information because employees find the system complicated and cumbersome. Additionally, turnover in state agencies has impeded the system's implementation. In an effort to mitigate the problem, Bureau of Fleet Management staff have provided training on the fleet management system. However, those attempts have yet to yield noteworthy improvement in data quality. Read the entire report.

Finally, we turn to South Carolina and the Legislative Audit Council. There, members of the South Carolina General Assembly requested that the council conduct an audit of the cadet discipline process at The Citadel and the college's hiring practices.

The report found that the cadet disciplinary appeal process was cumbersome and unnecessary, since the president of the college makes the final decision regardless of the recommendations of the various appeals panels and boards. The council also found that employees lacking the proper qualifications were hired for various positions at the college. For example, two employees whose positions required a master's degree did not have one. Interestingly, The Citadel, a military college, does not require employees who served in the military to submit documentation showing they are eligible to hold their positions with the college. Read the entire report.

There you have it—our highlighted reports for this winter edition. If there's a report you feel merits inclusion in the next edition of Report Radar, please feel free to send it my way.

Chris Latta is a project manager with Pennsylvania’s Legislative Budget and Finance Committee. He can be reached at [email protected]. 

Research and Methodology

Using REMI's Tax-PI to Study Economic Development Tax Incentives | Eric Whitaker (Washington)

Since 2014, Washington’s Joint Legislative Audit and Review Committee (JLARC) has used REMI’s Tax-PI to model the complex economic interactions associated with select state tax preferences. We have used it to help answer questions such as:

  • What economic impacts can we expect from aerospace industry tax preferences?
  • What are the likely economic impacts from a tax preference for motion picture production projects such as film, serial programming, and commercials?
  • How do these estimates vary when accounting for changes in government expenditures funded by the tax?

Helping Legislators Understand Policy Outcomes

This type of analysis does not establish causality. But it can provide valuable information for legislators about expected outcomes for a variety of economic, social, and demographic variables. Legislators have been receptive to our analyses using REMI and appreciate that we can evaluate many possible scenarios.

What is Tax-PI and why do we use it?

Tax-PI is a macroeconomic impact model that incorporates aspects of four major economic modeling approaches: input-output, general equilibrium, econometrics and new economic geography. The software’s main purpose is to estimate the economic and fiscal effects and the demographic impacts of tax policy change. 

Washington statute directs JLARC staff to evaluate the trade-off between the economic activities associated with a tax preference and those related to government spending equivalent to the direct revenue reduction. We chose Tax-PI over competing software because it has the ability to model the public sector as an endogenous component of the state economy.

In addition, the software is flexible and the historical data comes from federal agencies such as the U.S. Census Bureau and the Bureaus of Economic Analysis and Labor Statistics. The underlying structure can be modified to accommodate conditions unique to individual states or regions. The model includes various features that make it particularly useful for our work:

  • REMI built a customized statewide model to reflect Washington’s economy.
  • Our model contains 160 NAICS-based industry sectors and provides the functionality to build custom industries.
  • Users construct and calibrate a budget to account for their state’s revenue and expenditure forecasts.
  • The model can forecast economic and revenue impacts over multiple years. 

Case Study: Using Tax-PI to evaluate aerospace tax preferences

Our first experience using Tax-PI was in 2014, as part of our review of Washington's aerospace tax preferences. The Legislature passed the package in 2003 to help secure the manufacturing work for Boeing's 787 production line. Specifically, we simulated distinct scenarios that were presented to policymakers during the legislative process. In one scenario, we assumed the tax package did not pass and that Boeing moved 80 percent of its in-state workforce to other states. Other scenarios assumed the tax package passed and that new employees were hired or employees were shifted from other production lines. In these two scenarios, we also reduced government spending by an amount equivalent to the value of the tax package.

Tax-PI produced a year-by-year estimate of total statewide economic activity, accounting for the direct, indirect, and induced effects associated with each scenario (a toy sketch of these effects follows the list below).

  • Direct effects are industry specific and capture how a target industry responds to a particular policy change.
  • Indirect effects capture employment and spending decisions by businesses in the targeted industry’s supply chain that provide goods and services.
  • Induced effects capture the spending and consumption habits of employees in targeted and related industries.
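To build intuition for where indirect effects come from, here is a toy input-output sketch in Python, one of the four modeling traditions Tax-PI incorporates. The two-sector coefficients are invented for illustration and bear no relation to REMI's actual model:

```python
import numpy as np

# A[i, j] = dollars of sector i's output used to produce $1 of sector j's
# output (invented coefficients, for illustration only).
A = np.array([[0.10, 0.25],
              [0.20, 0.05]])

# Leontief inverse: total output (direct + indirect) per dollar of final demand.
L = np.linalg.inv(np.eye(2) - A)

direct = np.array([100.0, 0.0])   # a $100M direct demand change in sector 0
total = L @ direct                # direct plus supply-chain (indirect) effects
print("total:", total.round(1), "indirect:", (total - direct).round(1))
# Induced effects (re-spending by employees) require endogenizing households,
# which full models such as Tax-PI do.
```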

The estimates suggested that public sector employment would likely absorb significant reductions under all scenarios. Aerospace industry employment increased in two of the three scenarios. However, total statewide employment would increase only if the industry had a significant hiring boom. This result is consistent with aerospace multipliers we found in other studies.

Additional details and analyses are available in our published aerospace tax preference review.

Case Study: Using Tax-PI to evaluate motion picture tax preferences

In 2015, we used Tax-PI to simulate the economic activity associated with Washington’s Motion Picture Competitiveness Program. The MPCP reimburses select projects for a portion of their qualifying in-state spending. Once the project is complete, beneficiaries must submit documentation and receipts, which are verified by the program administrator before reimbursement.

JLARC staff used this detail to build scenarios that captured the direct industry-level purchases of goods and services and the employment associated with each project. Lacking information on how much of the activity would have occurred in Washington but for the preference, we ran one scenario assuming that all of the activity was due to the tax preference and then systematically reduced that assumption in subsequent iterations (e.g., 100 percent, 75 percent, 50 percent, etc.). We used this break-even analysis to identify the level of new economic activity needed to offset the lost activity due to reduced state spending. Our published report includes a graph summarizing the employment estimates, along with further detail and similar estimates for other outcomes.
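As a rough illustration of the break-even logic (the numbers below are invented placeholders, not JLARC figures; in practice each scenario is a full Tax-PI model run):

```python
# Hypothetical break-even sweep over the attribution assumption.
GROSS_JOBS_IF_ALL_NEW = 500      # invented: gain if 100% of activity is new
OFFSET_FROM_SPENDING_CUT = 300   # invented: loss from the equivalent spending cut

for share in (1.00, 0.75, 0.50, 0.25):
    net = share * GROSS_JOBS_IF_ALL_NEW - OFFSET_FROM_SPENDING_CUT
    print(f"attribution {share:.0%}: net employment {net:+.0f}")

# Break-even: the attribution share at which the net impact is zero.
print(f"break-even share: {OFFSET_FROM_SPENDING_CUT / GROSS_JOBS_IF_ALL_NEW:.0%}")
```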

In addition to the direct, indirect, and induced economic effects, Tax-PI also estimates tax revenue associated with the simulated activity. We found that the return on investment was approximately $0.06 per dollar of foregone revenue and that this number was consistent with work conducted by other independent analysts. 

Concluding Thoughts

Agencies that are evaluating whether Tax-PI is a good tool for their work should consider the following:

  • The software requires a significant financial investment for an annual license. We currently partner with another state agency and pay $47,000 for our annual license.
  • REMI does not currently offer a user’s manual, so training and learning are also a significant investment. We have found REMI’s customer support to be responsive and knowledgeable.
  • Having more than one trained user on staff is helpful. We have been fortunate to have three licensed users in our office. This permits staff to collaborate on projects.
  • Outside expertise is available and valuable. We have invited outside voices and advisors with content expertise onto specific projects to truth test our assumptions, provide suggestions about modeling choices, and help interpret our output.

For Washington JLARC, Tax-PI has been a valuable tool for analyzing specific tax preferences and helping legislators understand the potential effects of policy decisions. 

Eric Whitaker, Ph.D., works on tax reviews for Washington JLARC. He can be reached at [email protected].

News & Snippets

NLPES Bylaws Officially Repealed and Re-enacted!

With 137 "yes" votes and 0 "no" votes, the NLPES membership has officially repealed and re-enacted the NLPES bylaws! The newly adopted bylaws are on the NLPES webpage.

This was an important effort to provide a solid foundation for NLPES governance now and into the future. We would like to thank all of the members who provided input throughout this process and participated in the vote. We would also like to thank our NCSL liaison, Brenda Erickson, for facilitating the voting process. Brenda reports that we had a total of 137 votes cast from 15 states.

A special thanks to Greg Fugate, NLPES immediate past chair and Elections Subcommittee chair, for spearheading this effort!

#Winning

With the 2018 awards season around the corner, now's the time to think about which award(s) your office will contend for! As a reminder, offices may submit applications or nominations for awards in four categories.

  • Certificates of Impact: Awarded to offices that released reports documenting public policy impacts within their respective states.
  • Excellence in Research Methods: Awarded to offices that have produced a report developed through the use of exemplary research methods. 
  • Excellence in Evaluation: Awarded to an office that has made significant contributions to the field of legislative program evaluation during a four-year period.
  • Outstanding Achievement Award: In addition, we will seek nominations for this award, presented to an individual or individuals who have made outstanding contributions to the field of legislative program evaluation. Please note—nominees do not have to be retirees.

We Need Judges!!!

If you are interested in judging one of the award categories, please contact a member of the Awards Subcommittee. No prior judging experience is required.

Visit the NLPES Awards Program page for more information about award categories. Also, be on the lookout for more details in Spring 2018!

Awards Subcommittee: Shunti Taylor ([email protected]) and Melinda Hamilton ([email protected]). 

Website Update

The NLPES website homepage is looking good, with pictures of your colleagues scrolling across the center of the page accompanied by their thoughts on the value of NLPES to them and their work. Take a look, and while you are there, review the Professional Development Resources materials, explore the opportunities for recognition of your work, and use the listserv to communicate quickly with your colleagues around the country. Finally, check out the new "Need assistance? Have advice?" bar that lets you post questions or suggestions for NLPES.

Staff Happenings

Pennsylvania

Philip R. Durgin retired at the end of 2017 after 36 years with the Pennsylvania Legislative Budget and Finance Committee. He had served as executive director since 1988. During his tenure as executive director, the office completed and released over 300 reports to the Pennsylvania General Assembly on topics touching on all areas of Commonwealth government operations—such as transportation, health and welfare, community and economic development, and conservation and environmental protection. Several of these reports received recognition from the National Legislative Program Evaluation Society, and many of the recommendations in these reports have been implemented to save taxpayer funds and enhance Commonwealth agency and program operations. Phil also served as a member of the Executive Committee of the National Legislative Program Evaluation Society and hosted the annual professional development seminar. Hail and farewell!

Texas

We see quite a few comings and goings in the NLPES newsletter. I can't recall seeing a notice, however, about the administrative staff who keep our offices running, assist the public, prepare our reports for publication and perform the many other duties we could not do without. With that in mind, I am honoring the retirements of three of the Texas Sunset Commission's long-serving administrative staff. Altogether, Sunset lost 95 years of collective experience and 60 percent of our five-member administrative staff.

Cindy Womack was one of the first Sunset employees when the Commission was created in 1977. She moved us from typewriters to word processing to publishing software, and for many years she single-handedly managed the entire business operation. Without Cindy, we would not have gotten paid, which would have made us quite unhappy. She has been such an essential part of our organization that it's hard to imagine Sunset without her.

Dawn Roberson started answering our phones 32 years ago and quickly became an integral part of our administrative team, taking on report production and review team support. As the agency’s business functions expanded, Dawn also took on many of the business operations. All of you know how important handling travel arrangements and processing those vouchers is. With Dawn at the helm, we were reimbursed within days, making everyone happy.

Cee Hartley has been with Sunset almost 20 years. Starting out in report production, she was soon promoted to office manager. Cee ensured that our reports, and pretty much everything that came out of our office, met her high standards, which meant being perfect, or as close as possible, every time. This also is how she ran the office, ensuring everyone—staff, legislators, and the public alike—was treated with kindness and respect. She has been a huge part of the positive public image of Sunset.

All three of these women made Sunset and Texas a much better place. We thank them for their service!

Please let us know if you have staff happenings to share! E-mail [email protected].

Stop the Presses! Performance Reports in the News | Dale Carlson (California)

Out with the old and in with the new! Happy New Year all!

If the last half of 2017 is any indication, our offices will continue to be bu-sy, bu-sy, bu-sy during 2018. Check out some of the media attention earned by reports our member offices issued from roughly May 2017 through December 2017.

ARKANSAS: Division of Legislative Audit

CALIFORNIA: State Auditor

COLORADO: Office of the State Auditor

DISTRICT OF COLUMBIA: Office of the D.C. Auditor

GEORGIA: Department of Audits and Accounts, Performance Audit Division

HAWAII: Office of the Auditor

ILLINOIS: Office of the Auditor General

KANSAS: Legislative Division of Post Audit

LOUISIANA: Legislative Auditor

MAINE: Office of Program Evaluation and Government Accountability

MICHIGAN: Office of the Auditor General

MINNESOTA: Office of the Legislative Auditor

MISSISSIPPI: Legislative PEER Committee

MONTANA: Legislative Audit Division

NEBRASKA: Legislative Audit Office, Performance Audit Section

NEW HAMPSHIRE: Legislative Budget Assistant Office, Audit Division

NEW JERSEY: Office of the State Auditor

NEW MEXICO: Legislative Finance Committee

NORTH CAROLINA: Program Evaluation Division

OREGON: Audits Division, Secretary of State

PENNSYLVANIA: Legislative Budget & Finance Committee

SOUTH CAROLINA: Legislative Audit Council

SOUTH DAKOTA: Department of Legislative Audit

TENNESSEE: Division of State Auditor

TEXAS: State Auditor’s Office

UTAH: Office of the Legislative Auditor General

VIRGINIA: Joint Legislative Audit and Review Commission

WEST VIRGINIA: Post Audit Division

WEST VIRGINIA: Performance Evaluation and Research Division

WISCONSIN: Legislative Audit Bureau

Share your coverage with us! If you would like us to highlight media attention about your reports in our next newsletter, send the hyperlinks to [email protected].

Websites, Professional Development and Other Resources

NLPES listserv: The NLPES listserv is an email discussion group for NLPES members. By sending a message to [email protected], you can reach all listserv subscribers simultaneously. Listserv members can query other states about evaluation work similar to their own projects, receive announcements about performance evaluation reports and job opportunities from other states, and be notified when the latest edition of this newsletter is available! To join the listserv, send an email to Brenda Erickson, NCSL liaison to NLPES, with the subject “SUBSCRIBE to NLPES Listserv.” Include your name, job title, audit agency/organization name, mailing address (including city, state, zip code), phone number and email address. A “Welcome” message will be sent to you once you are successfully added to the listserv. See the listserv link on the NLPES website for additional information on how to post messages and “netiquette” niceties, like not hitting “Reply All” when answering a query posted to the listserv.

Are you receiving our listserv emails? Some states’ systems block NLPES listserv emails. If you think you are not receiving our emails, please check your state’s security system and spam filters, and/or contact Brenda Erickson.

Legislative careers website: Know someone thinking about pursuing a career with a state legislature? Point them to the opportunities posted on NCSL’s legislative careers website. Job seekers can explore the types of work legislative staffers perform, including performance evaluation, budgeting, fiscal analysis, legal and policy research and opinions, bill drafting, public relations, library services, building security, and information technology support. Opportunities are posted by states offering positions under Legislative Jobs. Attracting young people to work as legislative staff will be increasingly important in the coming years: baby boomers make up about a third of the national workforce, yet nearly half of legislative staff responding to a survey were 50 years old or older. Replacing those staffers will present challenges. Check out the welcome video, “A Day at the Capitol,” and learn more about the opportunities and rewards of working for state legislatures. Watch the videos under Career Paths to hear from staffers in different states describing their jobs and the satisfaction they gain from a public service career.

Visit our NLPES online training library for a variety of refresher and training materials! There are nearly two dozen resources on planning and scoping, fieldwork, writing and publication, and management. Most are PowerPoint slides; some are narrated; a few are webinars or podcasts. Check them out.

Ask GAO Live: AskGAOLive is a 30-minute live chat where GAO staff discuss a specific report and its research and answer questions that are emailed or tweeted in. Sessions are recorded and archived on the website. You can also “follow” GAOLive to receive advance notice of chat sessions. Topics include veterans and higher education, prescription drug shortages, prison overcrowding, state and local fiscal outlook, and government contracting.

Ensuring the Public Trust: What’s the most common internal performance measure for evaluation shops? How many offices tweet? What percentage of staff has fewer than 10 years of experience? How can you contact a sister office in another state? Ensuring the Public Trust summarizes information about legislative offices conducting program evaluations, policy analyses, and performance audits across the country.
