
New Cyber Guidance Suggests Steps to Foil Russian Intel Threats

The National Security Agency announced that U.S. and U.K. authorities have released new joint cybersecurity guidance recommending measures network defenders can take to address ongoing cyber threats from the Russian Federation Foreign Intelligence Service, or SVR.

The joint advisory listed the common vulnerabilities and exposures that the SVR is exploiting through various malicious tactics, such as spearphishing, password spraying, malware deployment, cloud exploitation and living-off-the-land, or LOTL, attacks, the NSA said Thursday.

The new eight-page joint cybersecurity advisory, titled “Update on SVR Cyber Operations and Vulnerability Exploitation,” is co-authored by the NSA, the FBI, U.S. Cyber Command’s Cyber National Mission Force and the U.K.’s National Cyber Security Centre, or NCSC.

To reduce the potential SVR attack surface, the advisory suggests disabling unnecessary internet-accessible services, restricting access to trusted networks and removing unused applications from workstations.

Other advisory suggestions include multifactor user authentication and regular audits of cloud-based accounts and applications.
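As an illustration of the kind of log auditing such guidance points toward, the sketch below flags a password-spraying pattern (one source attempting logins across many distinct accounts) in a simplified authentication log. The log format, IP addresses and threshold are all invented for illustration; they are not drawn from the advisory itself.

```python
from collections import defaultdict

def find_spray_sources(failed_logins, threshold=5):
    """Flag source IPs whose failed logins span many distinct accounts,
    a telltale password-spraying pattern (one guess per account)."""
    accounts_by_source = defaultdict(set)
    for source_ip, account in failed_logins:
        accounts_by_source[source_ip].add(account)
    return {ip for ip, accounts in accounts_by_source.items()
            if len(accounts) >= threshold}

# Hypothetical log entries: (source IP, targeted account).
log = [("203.0.113.7", f"user{i}") for i in range(8)]   # one IP, many accounts
log += [("198.51.100.2", "alice")] * 3                  # one IP, one account

print(find_spray_sources(log))  # → {'203.0.113.7'}
```

A real deployment would feed this from authentication logs and tune the threshold to the environment; the point is that spraying is visible as breadth across accounts rather than depth against one.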

Additional mitigation measures on Russian exploitation of cloud environments are contained in another joint cybersecurity advisory issued in February. The earlier guidance was spearheaded by the U.K.’s NCSC and supported by international partners including U.S., Canadian, Australian and New Zealand security agencies.


USSF Says Boeing-Built X-37B to Perform Aerobraking Maneuvers

The U.S. Space Force and Boeing will work together to enable the X-37B Orbital Test Vehicle to perform a series of aerobraking maneuvers to alter its orbit around Earth while using minimal fuel.

The Space Force said Thursday the Boeing-built X-37B spacecraft will execute a series of passes using the drag of Earth’s atmosphere to change orbits and safely dispose of its service module in compliance with space debris mitigation standards.

“This novel and efficient series of maneuvers demonstrates the Space Force’s commitment to achieving groundbreaking innovation as it conducts national security missions in space,” said Secretary of the Air Force Frank Kendall.

Once aerobraking is complete, the spacecraft will resume efforts to meet its test and experimentation objectives.

“This first of a kind maneuver from the X-37B is an incredibly important milestone for the United States Space Force as we seek to expand our aptitude and ability to perform in this challenging domain,” said Chief of Space Operations Gen. Chance Saltzman.

Kendall and Saltzman are both 2024 Wash100 awardees.


Young Bang: Army Eyeing Faster Acquisition Pathway for AI

Young Bang, principal deputy assistant secretary of the Army for acquisition, logistics and technology, said the military branch is considering developing a separate path or a sub-path within the software acquisition pathway for artificial intelligence to accelerate the development and deployment of AI tools to warfighters, Federal News Network reported Thursday.

“Let’s figure it out. Let’s be creative. Let’s put things together. Let’s put a [Middle Tier of Acquisition pathway] with a software pathway and work with [the Office of the Secretary of Defense] to get to something faster for AI because as fast as the software pathway is, we need a faster path for algorithms,” said Bang.

According to FNN, the current software pathway requires programs to achieve a minimum viable capability release, or MVCR, within a year.

With the MVCR, the initial version of a functional capability is handed over to service personnel.

“What we’re saying is there’s great utility in a software pathway, but if we use a software pathway for algorithms — overnight is a good example, but some of these can actually take a little bit longer, but still a week. And if we think about that, a week versus an MVCR in a year — the timelines don’t align,” the 2024 Wash100 awardee noted.


How to get more value out of your 2024 health benefits

This content is provided by Blue Cross Blue Shield.

Open Season for federal employees starts Nov. 11, 2024, which makes now the ideal time to see how you can make the most of your current health benefits throughout the remainder of the year.

Since 1960, the Blue Cross and Blue Shield Federal Employee Program (FEP) has been proud to provide quality coverage to federal employees, retirees and their families. Our members know they can count on our nationwide network with over 2 million doctors and hospitals along with free preventive care and comprehensive prescription drug benefits to help keep them healthy.

But we also offer a wide range of tools and programs that help members make the most of their health care coverage this year and beyond.

Unlock more with MyBlue®

MyBlue, our member-only website, is the key. After creating an account, FEP members get 24/7 access to health and wellness resources that help them get more out of their coverage.

Our enhanced provider directory lets them find in-network providers, plus get helpful procedure cost estimates so they know how much a service may cost. There’s also the Financial Dashboard that lets members securely review their Explanation of Benefits (EOBs) and estimated out-of-pocket costs for the year.

Start earning wellness incentives

MyBlue is also the gateway to earning incentives and rewards. Through our wellness programs, members can get rewarded when they take steps to improve their health or manage serious conditions.

With the Blue Health Assessment (BHA), members can address health risks before they become issues. After answering a simple questionnaire about their health, they’ll receive a personalized score and action plan with realistic steps they can take to improve their health. Plus, eligible members earn $50 the first time they complete the BHA every year.

Members can then participate in our online coaching tool, Daily Habits. This program allows them to complete activities related to their well-being and earn rewards. Eligible members can earn up to $120 for completing three activities, including those related to losing weight, exercising, managing stress or managing conditions such as heart disease, high blood pressure or COPD.

FEP Blue Focus® members can even earn $150 in wellness incentives just for getting their annual physical.

Members diagnosed with high blood pressure can get a blood pressure monitor, at no cost, every two years. This monitor can help them easily track their blood pressure numbers at home.

More ways members can save

Through the Blue365® discount program, members get access to exclusive deals from over 100 national retailers and brands, including Fitbit, Philips Norelco, Reebok, Sun Basket, TruHearing and many more.

We know how important it is for retired members to stay on budget. That’s why FEP Blue Basic™ members enrolled in Medicare Part A and Part B can get up to $800 back for paying their Part B premiums.

Plus, with our Prescription Drug Cost Tool, members can see if a drug is covered under their plan and find the lowest price on medications in their area.

A plan that’s always by their side

At FEP, we’re dedicated to our members’ health and well-being. With Open Season fast approaching, we encourage you to see what the Benefit of Blue® can do for you.

Open Season is November 11 – December 9, 2024.

Learn more here

Copyright © 2024 Federal News Network. All rights reserved. This website is not intended for users located within the European Economic Area.


OPM sets up leave transfer program for feds impacted by Milton

For the second time in as many weeks, the Office of Personnel Management has announced that it will establish a temporary leave-sharing program to help employees who need time off from work to recover following Hurricane Milton’s landfall in Florida Wednesday.

After a brief dalliance with Category 5 winds, Milton struck as a Category 3 storm near Tampa, spawning tornadoes and causing other storm damage as it crossed the state before entering the Atlantic Ocean. The storm had killed at least 18 Americans as of Friday.

In a memo to agency heads Thursday, Acting OPM Director Rob Shriver announced that, as the agency did in connection with Hurricane Helene, which inundated several states across the southeast last month, OPM will establish an emergency leave transfer program for federal workers in Florida. Such programs allow federal employees to donate unused paid leave so that colleagues who need to take time off to recover from a natural disaster can do so without dipping into their own paid or unpaid leave.

“An ELTP permits employees in the executive and judicial branches . . . to donate unused annual leave for transfer to employees of the same or other agencies who are adversely affected by a major disaster or emergency, either directly or through adversely affected family members, and who need additional time off from work without having to use their own paid leave,” Shriver wrote. “Employees who are adversely affected and seek to become emergency leave recipients must apply in writing to their agencies.”

Although OPM authorizes emergency leave transfer programs, it is up to individual agencies to measure their employees’ need for donated leave and, if necessary, stand up a leave bank for colleagues to donate. If not enough leave is available within the agency’s leave bank to cover all requests, OPM then will step in to coordinate leave donations between agencies.

Shriver also reminded agency heads to refer to 2017 OPM guidance, issued after a similar flurry of severe storms and wildfires, loosening some restrictions on emergency leave transfer programs and providing tips to agencies tasked with maintaining multiple leave banks at once. Though agencies must maintain distinct leave transfer programs—as each is tied to a specific disaster or emergency—employees may elect to donate to multiple leave banks at once, and they also may redesignate already donated leave for use by victims of another storm, provided that leave hasn’t already been allocated.

“Agencies should contact OPM for assistance in receiving additional donated annual leave from other agencies only if they do not receive sufficient amounts of donated leave to meet the needs of emergency leave recipients within the agency,” Shriver wrote. “Based on the demand for donated leave, OPM will solicit and coordinate the transfer of donated annual leave among federal agencies.”


HHS to crack down on providers blocking access to electronic medical records

The Health and Human Services Department is getting serious about taking on medical providers and organizations engaged in information blocking practices that limit access to electronic health record data, according to a top official with the agency.

In a Tuesday blog post, Micky Tripathi — HHS assistant secretary for technology policy, national coordinator for health information technology and acting chief artificial intelligence officer — said the department is acutely aware that some bad actors are skirting information sharing requirements mandated by federal law. 

Tripathi wrote that HHS is “highly concerned about ongoing and recent reports that we have received about potential violations of both the letter and spirit of the various laws and regulations now in place to ensure information-sharing to improve our healthcare system and enhance the lives of all Americans.”

The 21st Century Cures Act, signed into law by President Barack Obama in December 2016, required, in part, that EHR systems be configured in such a way that patient information can be “accessed, exchanged and used without special effort through the use of application programming interfaces,” or APIs. There are eight specific exceptions to this requirement.

The law also prohibited information blocking, allowed patients to access their electronic health information and directed the HHS Office of the National Coordinator for Health Information Technology, or ONC, to create a process for the general public to report possible instances of information blocking.
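The APIs the Cures Act envisions typically exchange patient records as structured JSON resources (the HL7 FHIR standard is the common format for certified EHR APIs). As a rough illustration of what "accessed, exchanged and used without special effort" looks like in practice, the sketch below parses a heavily simplified, invented FHIR-style Patient resource; the field values and helper function are illustrative, not drawn from any real system.

```python
import json

# A pared-down, invented example in the shape of a FHIR Patient resource,
# the JSON structure certified EHR APIs commonly expose for patient records.
raw = json.dumps({
    "resourceType": "Patient",
    "id": "example-123",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-04-01",
})

def summarize_patient(resource_json):
    """Return a short human-readable summary of a Patient resource."""
    resource = json.loads(resource_json)
    if resource.get("resourceType") != "Patient":
        raise ValueError("expected a Patient resource")
    name = resource["name"][0]
    full_name = " ".join(name.get("given", []) + [name.get("family", "")])
    return f'{full_name.strip()} (born {resource.get("birthDate", "unknown")})'

print(summarize_patient(raw))  # → Jane Doe (born 1980-04-01)
```

The interoperability complaint at the heart of information blocking is precisely that third parties are prevented from making simple, standards-based requests like this against providers' systems.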

The blog post came after HHS announced in July that it was reorganizing its internal offices and would be renaming ONC as the Assistant Secretary for Technology Policy and Office of the National Coordinator for Health Information Technology. Tripathi was named the head of the joint office. 

Between April 5, 2021 and September 30, 2024, HHS reported that it received 1,095 information blocking claims through its submissions portal. The majority of these claims were filed by patients. 

Information blocking, however, has been a particular issue for clinicians trying to access records from other providers’ EHR systems. 

Tripathi said that, despite the 21st Century Cures Act’s specific wording that EHR and API technologies be accessible without “special effort,” some providers and organizations are still making it difficult for stored data to be interoperable or easily shareable across different systems. 

“What is abundantly clear is that it is behavior, rather than technology, that is far and away the biggest impediment to progress,” he wrote.

Some of the examples of information blocking reported to HHS included instances of API users —  typically physicians or healthcare providers — being prevented from connecting to EHR systems, API access being conditioned on fees and contractual terms prohibited by law, and providers or API developers “not providing written and timely responses to denials for access to electronic health information as required by regulation.”

The ONC created a voluntary Health IT Certification Program in 2010 to ensure that health information systems comply with HHS functionality, security and privacy standards. One of the program’s certification provisions includes promoting and supporting interoperability between systems. Tripathi noted in his blog post that over 96% of hospitals and approximately 78% of physician offices use EHR systems that have been certified through the program.

Tripathi said violating the conditions and maintenance of certification requirements by engaging in information blocking also violates the terms of the program. He added that HHS will continue a “direct review” of certified API developers and health IT systems to assess their compliance with the directive. 

“Certified health IT developers with identified non-conformities in their business practices or certified health IT could face suspension or termination of the affected certification(s),” he wrote. “Termination of certification of one or more of a developer’s health IT modules carries the added consequence of the developer being banned from the certification program.”

Moving forward, Tripathi said the department will also be upping its engagement with API users, working with certification bodies and “engaging” HHS’s Office of the Inspector General to address information blocking concerns. 

Beyond offering new educational resources and improvement feedback channels, Tripathi said his office will also expand its monitoring efforts. 

“[The Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology] will strengthen oversight and enforcement by implementing a more rigorous review process for API documentation, both at the initial certification stage and throughout ongoing certification maintenance,” he wrote. 

In a Tuesday post on X, Tripathi linked to his blog post and warned “we can do this the easy way….or the hard way.”


Pentagon releases final CMMC rule, paving way for implementation

The Defense Department released the final rule for the long-awaited Cybersecurity Maturity Model Certification program today, further paving the way for CMMC requirements to show up in contracts starting next year.

The final CMMC program rule was released for public inspection today. It’s expected to officially publish in the Federal Register on Tuesday, Oct. 15.

The rule establishes the mechanisms for the CMMC program. The goal of CMMC is to verify whether defense contractors are following cybersecurity requirements for protecting critical defense information. Many contractors will be required to receive a third-party audit under the program, a significant departure from the current regime of relying on self-attestation.

DoD released the proposed CMMC program rule last December. The department received 787 comments on the rule before the public submission period closed in February.

“The department would like to thank all the businesses and industry associations that provided input during the public comment period,” DoD said in a statement released today. “Without this collaboration, it would not have been possible to meet our goals of improving security of critical information and increasing compliance with cybersecurity requirements while simultaneously making it easier for small and medium-sized businesses to meet their contractual obligations.”

The final rule released today codifies the CMMC program and its processes. Separately, the Pentagon published a proposed CMMC acquisition rule this past summer. The comment period on the proposed acquisition rule closes Oct. 14.

In its statement today, DoD said the final acquisition rule will be published in “early to mid-2025.”

“Once that rule is effective, DoD will include CMMC requirements in solicitations and contracts,” DoD added.

CMMC a ‘glacial effort’

The Pentagon has been developing the CMMC requirements for more than five years. DoD began developing the rules due to concerns that many companies were not following contractual cybersecurity requirements, allowing U.S. adversaries to steal sensitive but unclassified data from their networks.

After significant industry pushback due to the expected costs and impacts of the original program, however, DoD revised the program into the so-called “CMMC 2.0” in 2022.

During an appearance at the Professional Services Council’s annual defense conference on Tuesday, Deputy DoD Chief Information Officer Dave McKeown acknowledged how long it’s taken for CMMC to come to fruition.

“We’re nearing the end for sure – it has been a glacial effort,” McKeown said. “It has taken a long time, and it’s taken a lot of perseverance to work through getting the rule right and getting it approved, but we are definitely nearing the end, and it is imminent that this will be released, and everybody will have this in their contracts going forward.”

DoD will eventually scale the CMMC requirements across all applicable contracts. But in its proposed acquisition rule, the Pentagon laid out plans for a three-year-long “phased rollout” of the requirements. During that time, DoD program managers would have the discretion to include CMMC in contracts.

Three levels of CMMC

The final rule establishes three distinct “levels” of CMMC, as first envisioned under the revised program.

The CMMC requirements align with existing acquisition rules that require contractors to implement the cybersecurity controls in National Institute of Standards and Technology (NIST) Special Publication 800-171 for protecting controlled unclassified information.

Under level one, contractors that handle less sensitive “federal contract information” will be able to submit a self-assessment of their compliance.

Under CMMC level two requirements, contractors that are generally required to protect “controlled unclassified information,” or CUI, may be required to obtain a third-party assessment. Those auditors will be authorized by the Cyber Accreditation Body, a nonprofit that holds a contract with DoD.

Meanwhile, DoD says some CUI will require “a higher level of protection against risk from advanced persistent threats.” Contractors that handle that type of information will be required to get an assessment led by the Defense Industrial Base Cybersecurity Assessment Center as part of CMMC level three requirements. The level three requirements include additional cybersecurity controls laid out in NIST Special Publication 800-172.
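Assessments against NIST SP 800-171 are commonly scored with DoD's assessment methodology, which starts from a perfect score of 110 (one point per requirement) and deducts a weighted value for each requirement not implemented. A minimal sketch of that arithmetic, with an invented subset of requirement IDs and weights chosen for illustration:

```python
PERFECT_SCORE = 110  # one point per NIST SP 800-171 requirement

# Illustrative subset: requirement ID -> deduction weight if unimplemented.
# DoD's methodology assigns each requirement a weight of 5, 3 or 1.
WEIGHTS = {"3.1.1": 5, "3.1.2": 5, "3.5.3": 5, "3.13.11": 3, "3.3.3": 1}

def assessment_score(unimplemented):
    """Deduct the weight of every unimplemented requirement from 110."""
    return PERFECT_SCORE - sum(WEIGHTS[req] for req in unimplemented)

print(assessment_score(["3.1.1", "3.3.3"]))  # → 104
```

Because the weights are large relative to the 110-point scale, a handful of missing high-weight controls can drive a score well below zero, which is why DoD encourages contractors to gauge compliance before assessments begin.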

“CMMC provides the tools to hold accountable entities or individuals that put U.S. information or systems at risk by knowingly misrepresenting their cybersecurity practices or protocols, or knowingly violating obligations to monitor and report cybersecurity incidents and breaches,” DoD said in its statement today.

The rule also allows DoD program offices to grant “Plans of Action and Milestones” for contractors that don’t fully comply with the NIST requirements. DoD says POA&Ms will be granted for “specific requirements as outlined in the rule to allow a business to obtain conditional certification for 180 days while working to meet the NIST standards.”

DoD in its statement released today encouraged companies in the defense industrial base to “take action to gauge their compliance with existing security requirements and preparedness to comply with CMMC assessments.”

Cloud plans

With companies and small business advocates raising concerns about the cost and complexities of CMMC, the Pentagon is also pointing businesses to cloud offerings and other managed services that could be used to meet the requirements.

Meanwhile, McKeown said DoD is partnering with large cloud service providers and managed service providers to establish a certification program that could meet “all or most” of CMMC requirements.

“There will probably be roles and responsibilities outlined between what the cloud service provider will do or the managed service provider will do, and the customer will have to do, but it will make it streamlined,” McKeown said. “Much like FedRAMP, we’ll say that this has our seal of approval that it is CMMC compliant, and then partners can start doing their work out of these environments and not have to uplift their whole entire home network in order to meet the requirements. That’s going along very well.”



OMB regulation sets standards for ‘trustworthy’ government statistics

More than a dozen federal statistical agencies produce data sets that drive policy decisions in government and business decisions across the economy.

The Office of Management and Budget, underscoring the value of that data, is setting a standard for trustworthy government statistics.

OMB, in its “trust regulation,” published in the Federal Register on Friday, is taking steps to ensure statistical agencies produce “accurate, objective and trustworthy information.”

Chief Statistician of the United States Karen Orvis wrote in a blog post Thursday that the final regulation ensures statistical agencies “remain safe places for the collection, maintenance, and sharing of information critical to government decision making,” while protecting the privacy of individuals and organizations.

“Federal statistics are produced as a public good, whose value is rooted in public trust.  Maintaining and bolstering public trust in our nation’s statistics is absolutely critical,” Orvis wrote.

The regulation, required under the Foundations for Evidence-Based Policymaking Act, comes a few months after the American Statistical Association published a report warning that statistical agencies are having a harder time producing quality data.

In addition to budget and staffing shortages, the report found declining trust in the federal government corresponds with lower response rates to statistical surveys.

The report also warns federal statistical agencies lack “professional autonomy” from their parent agencies, and that they remain vulnerable to political meddling and improper influence.

The Trump administration, for example, pushed for adding a citizenship question to the 2020 census. The Supreme Court, however, blocked the Census Bureau from adding the question to decennial count forms.

The administration also pressured the Census Bureau to produce a report on the number of undocumented people in the U.S.

Former Chief Statistician of the U.S. Nancy Potok said the regulation spells out the “respective roles and responsibilities of not only the statistical agencies themselves, but the parent agencies in which they reside.”

“Finding the balance between professional autonomy for the statistical agencies to produce objective, trustworthy statistics and still serve the policy objectives of the president and executive branch political appointees has been a decades-long struggle,” Potok said.

“This is a big step forward in articulating what that balance should look like. Now we have to wait and see if there are mechanisms to enforce the regulation,” she added.

The final regulation states the federal statistical system “continues to provide the gold standard for impartial, trusted federal statistics foundational to informing decisions across the public and private sectors.”

The federal government has 16 federal statistical agencies, which have anywhere from 10 to 7,000 full-time employees.

“Increasingly, collaboration is required across the Federal statistical system to unlock greater efficiencies and leverage diverse expertise,” the regulation states.

Data Foundation President Nick Hart said in a statement Friday that the rule “marks an important milestone in implementing the Evidence Act.”

“The success of this rule will depend on meaningful collaboration across the federal data ecosystem, far beyond recognized statistical agencies and units,” Hart said. “The Data Foundation urges statistical agencies to work closely with chief data officers, evaluation officers, performance officers, chief information officers, chief financial officers, and other key data leaders to ensure this rule supports a comprehensive, government-wide approach to evidence-building activities and responsible data use,” the foundation wrote.

Hart told Federal News Network in an interview that OMB’s regulation recognizes statistical agencies are no longer just producing discrete figures — such as gross domestic product or unemployment rates — but also coming up with objective measures of how government programs are operating.

“That’s really the broad intent of the Evidence Act, to not just understand how our government works, but help it work better,” Hart said.

Hart said OMB’s trust regulation also sets a standard for statistical agencies ahead of the presidential election in November and in anticipation of a new administration.

“Regardless of who wins that election, this will be a regulation that goes forward into the next administration, and into the executive branch going forward,” Hart said.



DCSA to release implementation plan for background investigation system

The Defense Counterintelligence and Security Agency (DCSA) is set to publish an implementation strategy for its National Background Investigation Services (NBIS) program, a crucial step in getting the long-delayed initiative back on track. 

Once fully implemented, NBIS will serve as a “one-stop shop” background investigation system, offering security clearance applications, case management tools, and continuous vetting data, among other features. The IT system is critical to implementing the federal government’s Trusted Workforce 2.0 initiative, the largest set of reforms to personnel vetting processes, including security clearances, public trust and credentialing, and background investigations.

The NBIS program, however, has faced cost and performance issues — the Defense Department indicated earlier this year NBIS was not on track to meet key milestones.

“We all know that we’ve had some delays at DCSA with NBIS, but we have a new timeline that’s with [the Office of the Director of National Intelligence] right now under review. We’re all in — this is something that we view as can’t fail, and the timeline is the best plan that I’ve seen so far. I think it’s rooted in a lot of solid thought and partnership across the stakeholder community,” Jonathan Maffet, the executive program manager at DCSA, said during the Professional Services Council’s Defense Conference Tuesday.

The second phase of the Trusted Workforce 2.0 calls for streamlining all of the security clearance policies that have been put in place over the course of the past 70 years. The initiative also instructs a shift from a periodic reinvestigation model for security clearances to a continuous vetting model.

Matthew Eanes, the director of the Performance Accountability Council program management office, said the transition has pushed legacy systems to their limits.

“There’s three phases of the Trusted Workforce. Phase Two was to do the policies, but it was also to implement what we called transitionary states. We essentially pushed as far as we could push with the legacy capabilities, and stretched them to the seams, then found some duct tape and stretched them a little bit further. And where we found ourselves, we were out of duct tape. The remaining implementation of Trusted Workforce is largely dependent on the critical path for NBIS,” Eanes said.

Eanes said the new implementation strategy, along with all the milestones associated with the strategy, will be published this week on performance.gov. 

There is also a push to implement a set of end-to-end shared services — the Office of the Director of National Intelligence (ODNI) is developing tools such as Scattered Castles, a system that tracks security clearances, and a new tool called Transparency of Reciprocity Information System (ToRIS).

ToRIS will address the reciprocity challenges across intelligence agencies and allow employees’ clearances to follow them when transferring between departments.

“It’s kind of like the one missing puzzle piece that sits in between the IC elements. It’s going to fill the data sharing gap between them so we can move people between IC elements faster,” Eanes said.

ODNI just recently received funding for ToRIS. There are no firm timelines for its development yet, though tentative dates for the project exist.

Eanes said the dates are “going through coordination right now” and will be posted on performance.gov “in a couple of weeks.”



Will your AI bot put citizens at risk?

With Congress recommending both guardrails and a “full steam ahead” mindset for federal artificial intelligence deployments, agencies will feel the pressure to deliver AI-enabled services to citizens quickly. But how do they know their bots will not introduce harm and put individual team members, their organizations and the citizens they serve at risk?

Government agencies have an obligation to provide accurate information to citizens, and a bad bot can have both legal and moral implications. Last year, for example, the IRS was cited by the Government Accountability Office for its use of AI in flagging tax returns for audit, after the technology was found to possibly include unintentional bias. The IRS had humans in the loop with this system, but guidance from the executive order and other directives appeared not to have been implemented at the time the potential for bias was discovered.

The IRS incident is a reminder of how important it is for agencies to do everything possible to avoid risk to citizens and to safeguard government and personal data before risk becomes reality. That may sound daunting, but federal guidance and frameworks highlight what is needed: understanding AI risks, having DevOps and DevSecOps teams operate concurrently, establishing an independent red team that ensures the model delivers the highest quality results, and more, even if the details of how to do this are less clear. However, leaning on best practices already defined in data security and software development provides a clear path to ensuring AI does not introduce risk.

Keep risk front and center

Validating AI can be daunting because many AI models make a tradeoff between accuracy and explainability — but it’s necessary to mitigate risk. Start by asking questions that quality assurance (QA) would ask about any application. What’s the risk of failure, and what’s the potential impact of that failure? What potential outputs could your AI system produce? Who could it present them to? What impact might that have?

A risk-based approach to application development isn’t new, but it needs to be reinforced for AI. Many teams have become comfortable simply producing or buying software that meets requirements, and DevOps processes embed quality and security testing from the beginning. But because AI requires taking a hard look at the ways a system might “misbehave” relative to its intended use, simply applying current QA processes is the wrong approach. AI cannot simply be patched if it makes a mistake.

Adopt an adversarial mindset

Red teams are routinely deployed to uncover weaknesses in systems and should be used to test AI, but not in the same manner as in traditional application development. An AI red team must be walled off from the day-to-day development team and insulated from that team’s successes and failures.

AI red teams in government should include internal technologists and ethicists, participants from government-owned laboratories, and ideally, trusted external consultants — none of whom build or benefit from the software. Each should understand how the AI system may impact the broader technology infrastructure in place, as well as citizens.

AI red teams should work with an adversarial mindset to identify harmful or discriminatory outputs from an AI system along with unforeseen or undesirable system behaviors. They should also be looking specifically for limitations or potential risks associated with misuse of the AI system.

Red teams should be free of the pressures of release timing and political expectations and report to someone in leadership, likely the chief AI officer (CAIO), who is outside of the development or implementation team. This will help ensure the effectiveness of the AI model and align with the guardrails in place.

Rethink validation to development ratio

Advances in AI have brought massive improvements in efficiency. A chatbot that might have taken months to build can now be produced in just days.

Don’t assume AI testing can be completed just as quickly. Proper validation of AI systems is multifaceted, and the ratio of testing time to development time will need to be closer to 70% to 80% for AI, rather than the typical 35% to 50% for enterprise software. Much of this uplift is driven by the fact that requirements are often brought into sharp relief during testing, so the cycle becomes an “iterative development mini cycle” rather than a traditional testing cycle. DevOps teams should allow time to check training data and to test for privacy violations, bias, error states, penetration attempts, data leakage and liabilities, such as the potential for AI outputs to make false or misleading statements. Red teams also need their own time allotment to try to make the system misbehave.

Establish AI data guidelines

Agencies should establish guidelines for which data will and will not be used to train their AI systems. If using internal data, agencies should maintain a registry of the data and inform data generators that the data will be used to train an AI model. The guidelines should be particular to each unique use case.

AI models don’t internally partition data like a database does, so data trained from one source might be accessible under a different user account. Agencies should consider adopting a “one model per sensitive domain” policy if their organization trains AI models with sensitive data, which likely applies to most government implementations.
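A "one model per sensitive domain" policy can be enforced at the routing layer: requests for a given data domain resolve only to the model trained on that domain, and fail closed otherwise. The domain names and model identifiers below are hypothetical placeholders.

```python
# Hypothetical mapping: each sensitive data domain gets its own model,
# so data trained into one model cannot surface through another domain's model.
DOMAIN_MODELS = {
    "tax_records": "model-tax-v1",
    "benefits_claims": "model-benefits-v1",
}

def model_for(domain: str) -> str:
    """Resolve which model may serve a request for a given sensitive domain."""
    try:
        return DOMAIN_MODELS[domain]
    except KeyError:
        # Fail closed: a sensitive request must never fall through
        # to a shared, general-purpose model.
        raise ValueError(f"no model registered for domain {domain!r}")
```

Failing closed on an unknown domain is the key design choice: it turns a data-partitioning policy into a runtime guarantee rather than a convention.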

Be transparent about AI outputs

AI developers must communicate what content or recommendations are being generated by an AI system. For instance, if an agency’s customers will interact with a chatbot, they should be made aware the content is AI-generated.

Similarly, if an AI system produces content such as documents or images, the agency might be required to maintain a registry of those assets so that they can later be validated as “real.” Such assets might also require a digital watermark. While this isn’t yet a requirement, many agencies already adopt this best practice.
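The asset registry described above can be sketched as an append-only log of content fingerprints, checked later to validate whether a document or image was agency-generated. The file name, log fields, and function names here are illustrative assumptions, not an established standard.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical append-only log of AI-generated assets.
ASSET_LOG = Path("generated_assets.jsonl")

def record_generated_asset(content: bytes, model_id: str, request_id: str) -> str:
    """Fingerprint a generated asset so it can later be validated as agency-produced."""
    digest = hashlib.sha256(content).hexdigest()
    entry = {
        "created_at": datetime.now(timezone.utc).isoformat(),
        "sha256": digest,
        "model_id": model_id,
        "request_id": request_id,
    }
    with ASSET_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return digest

def is_registered(content: bytes) -> bool:
    """Check whether content matches a previously logged generated asset."""
    if not ASSET_LOG.exists():
        return False
    digest = hashlib.sha256(content).hexdigest()
    with ASSET_LOG.open() as f:
        return any(json.loads(line)["sha256"] == digest for line in f)
```

A hash-based registry only confirms exact matches; a visible or embedded watermark, as the article notes, would be a separate mechanism layered on top.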

Agencies must continually monitor, red team, refine and validate models to ensure they operate as intended and provide accurate, unbiased information. By prioritizing independence, integrity and transparency, models built today will provide the foundation agencies need to improve operations and serve citizens while maintaining the public’s safety and privacy.

David Colwell is vice president of artificial intelligence and machine learning for Tricentis, a provider of automated software testing solutions designed to accelerate application delivery and digital transformation.
