Data, AI, and Digital Resilience: Building Secure, Intelligent Systems for a Complex World
Federal agencies are under pressure from every direction. Cyber threats are increasing in volume. Data environments are growing more complex. AI is moving from experimentation into everyday workflows. At the same time, leaders are being asked to improve performance, strengthen security, and modernize operations without unlimited time, staff, or budget.
That combination is forcing a more practical conversation about what resilient digital systems actually require. It is no longer enough to talk about AI in abstract terms or treat cybersecurity as a separate technical function. Leaders have to think about the full operating picture: data quality, governance, hybrid infrastructure, access controls, workforce readiness, and the real conditions under which teams are expected to deliver.
That broader reality shaped Leadership Connect’s webinar, “Data, AI, and Digital Resilience: Building Secure, Intelligent Systems for a Complex World,” held in partnership with Splunk and Amazon Web Services (AWS), which brought together perspectives from government and industry on how organizations are approaching AI, data, and security in 2026. The discussion stayed grounded in implementation. Rather than focusing on distant possibilities, the panel examined what is working now, where agencies are running into friction, and what leaders need to get right if they want secure, intelligent systems to hold up under pressure.
Couldn’t attend the session live? Register here to watch the full webinar, and follow our events page to join the next conversation. Below are the key themes that shaped the discussion!
AI is delivering value first through practical, everyday work
One of the clearest messages from the discussion was that AI’s near-term value often comes from ordinary tasks, not headline-grabbing breakthroughs. While AI can support sophisticated cyber and observability use cases, panelists pointed repeatedly to more immediate gains in productivity and workflow support.
In security operations, AI is helping teams understand detections faster, automate parts of remediation, and work through playbooks more efficiently. It is also helping technical users make sense of complex tools by translating queries, workflows, and troubleshooting steps into more natural language. That matters in environments where learning curves are steep and teams do not have time to master every platform from scratch.
At the same time, AI is proving useful well beyond the security operations center (SOC). The panel discussed meeting summaries, note review, spreadsheet consolidation, and other routine administrative work that consumes time but does not always require deep human judgment. Those use cases may sound modest, but they are highly relevant in government settings where staff are overloaded and every hour matters.
The panel’s view was not that AI replaces people. It was that AI becomes most useful when it reduces friction, improves understanding, and lets staff focus their attention where judgment matters most.
Better AI starts with better data, and better data starts with context
As quickly as AI capabilities are evolving, the conversation repeatedly came back to a more foundational truth: data quality still determines outcomes. Agencies may be eager to scale AI, but if the underlying data is inconsistent, poorly labeled, or hard to interpret, those efforts will struggle.
That challenge starts with classification. Panelists described how difficult it can be to create clear categories for data, especially when dealing with controlled unclassified information (CUI), personally identifiable information (PII), protected health information (PHI), procurement-sensitive information, and other overlapping forms of sensitivity. The issue is not only whether data is sensitive, but why it is sensitive, how it is being used, and what protections should follow from that context.
This is where the discussion moved beyond static classification models. Agencies are increasingly thinking in more dynamic, risk-based terms. Rather than relying on simple category labels, they are aligning data handling with usage, storage, access, monitoring, and encryption requirements. That is especially important when data moves across systems and environments.
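To make that shift concrete, the sketch below shows one way risk-based handling logic can be expressed in code, deriving controls from both a category label and the context of use. The categories, usage contexts, and control names are illustrative assumptions for this article, not anything the panel prescribed.

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    category: str           # e.g., "PII", "PHI", "CUI" (illustrative labels)
    usage: str              # e.g., "analytics", "model_training", "sharing"
    crosses_boundary: bool  # does the data leave its originating system?

def handling_requirements(asset: DataAsset) -> set[str]:
    """Derive controls from category plus context, not a static label alone."""
    controls = {"encrypt_at_rest", "audit_access"}  # baseline for sensitive data
    if asset.category in {"PII", "PHI"}:
        controls.add("least_privilege_access")
    if asset.usage == "model_training":
        controls.add("de_identify_before_use")
    if asset.crosses_boundary:
        controls |= {"encrypt_in_transit", "monitor_egress"}
    return controls

# Example: PHI used for model training across a system boundary
print(handling_requirements(DataAsset("PHI", "model_training", True)))
```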
The panel also emphasized that context matters. Not all PII carries the same risk in every circumstance. An email address in a signature block is different from a list of employee contact details exposed more broadly. That kind of nuance is essential for both governance and practical implementation, and it is one reason fully automated classification is not enough on its own.
Human judgment and machine support have to work together
The discussion made clear that agencies are not choosing between manual processes and full automation. The more realistic model is a hybrid one, where machine learning helps identify likely classifications, patterns, or actions, and humans validate, adjust, and apply judgment.
That approach came through strongly in the panel’s discussion of data tagging and training workflows. Machine assistance can reduce inconsistency and speed up labor-intensive tasks, especially in email and document environments. But it still needs human review, particularly when data categories overlap or when the right answer depends on context.
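As a rough illustration of that hybrid workflow, the sketch below applies machine labels automatically only above a confidence threshold and routes everything else to a human review queue. The classifier, threshold, and label names are hypothetical placeholders.

```python
CONFIDENCE_THRESHOLD = 0.90  # illustrative; real thresholds need tuning

def classify_document(doc: str) -> tuple[str, float]:
    """Stand-in for a trained classifier that returns (label, confidence)."""
    # A real system would call a model here; this placeholder always defers.
    return ("CUI", 0.62)

def triage(doc: str, review_queue: list) -> str:
    label, confidence = classify_document(doc)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                      # machine label applied automatically
    review_queue.append((doc, label))     # low confidence: a human validates
    return "PENDING_HUMAN_REVIEW"

queue: list = []
print(triage("Draft memo with employee contact details...", queue))
print(len(queue), "item(s) awaiting human review")
```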
This same principle came up in cybersecurity operations. AI can help analysts understand alerts, generate actions, and navigate complex platforms, but it still needs a human in the loop. Panelists consistently framed AI as a partner that boosts productivity and learning, not as an independent actor that should be left unchecked.
That distinction matters for leadership teams. The real opportunity is not to remove humans from the process. It is to design systems where people and AI contribute different strengths, with enough oversight to preserve trust, accountability, and control.
Training data and governance are becoming make-or-break issues
Another major throughline was the importance of training data and governance. Leaders may want to move quickly on AI pilots and use cases, but poor training inputs and weak governance can turn promising efforts into disappointing implementations.
The panel described several forms of this challenge. Sensitive data often cannot be used freely for model training, especially when external vendors or third-party tools are involved. De-identification helps, but it is difficult to do well. Teams also need both positive and negative examples when training classification systems, which forces them to define categories more precisely than they may have before.
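A deliberately naive sketch helps show why de-identification is harder than it looks. The patterns below are illustrative only, and the gaps they leave are exactly the panel's point.

```python
import re

# Naive patterns for two common identifiers; real PII takes many more forms.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Contact J. Doe at jdoe@agency.example, SSN 123-45-6789."))
# -> "Contact J. Doe at [EMAIL], SSN [SSN]."
# Note what survives: the name is untouched, and quasi-identifiers such as
# roles, dates, and locations pass straight through. Pattern matching alone
# is not de-identification, which is why the panel flagged it as hard.
```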
At the same time, agencies do not always know where their data originated, how it has changed over time, or whether it has been labeled consistently. That creates risks not only for quality, but also for bias, ethical use, and accountability. If organizations do not understand the lineage and condition of their data, they will struggle to trust the outputs of the systems built on top of it.
The panel’s message was that governance is not a side issue to address later. It is part of AI readiness from the start. Agencies that want reliable outcomes will need stronger processes for data provenance, labeling, normalization, oversight, and ongoing stewardship.
Hybrid environments require visibility, guardrails, and oversight
The webinar also highlighted how much complexity comes from operating across multiple systems, data sources, and environments. Agencies are managing data across on-prem systems, cloud platforms, and organizational boundaries, often with fragmented visibility and uneven controls.
This makes access architecture a strategic issue, not just a technical one. Panelists discussed the importance of deciding whether data needs to be centrally ingested or whether it can be searched remotely. They also distinguished between architectural questions like data residency and broader issues involving legal jurisdiction and sovereignty. In either case, the core challenge is the same: agencies need secure, controlled, usable access without creating unnecessary exposure.
That becomes harder when data is fragmented or inconsistent across sources. The panel spent meaningful time on the reality that similar data from different systems often conflicts. Teams have to reconcile discrepancies, revalidate facts, and work through ambiguity before they can trust what they are seeing. In high-stakes environments, that work takes time, but it is essential.
The discussion also underscored that cloud adoption does not remove responsibility. Agencies still need clear policies on data ownership, access, and use. Handing systems to a provider does not eliminate the need for federal oversight, monitoring, and governance.
Zero trust is not just a framework. It is an operating discipline
Zero trust was another major theme, especially in how it shapes access decisions in data-rich and AI-enabled environments. Least-privilege access came up repeatedly as a practical safeguard, not just a theoretical principle.
That applies to people, but it also increasingly applies to non-human identities. As organizations introduce AI agents and assistive tools, they are expanding the number of systems and services interacting with enterprise data. Those tools need permissions too, which means leaders have to think carefully about what access is granted, for how long, and under what controls.
The panel described several useful guardrails, including time-bound access, tighter authentication, micro-segmentation, and stronger control over who or what can access sensitive data. The idea is not to block data use entirely. It is to make access more intentional, more limited, and easier to monitor.
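A minimal sketch of what a time-bound, least-privilege grant might look like for a non-human identity appears below. It is illustrative only; a real implementation would sit behind an agency's identity provider and policy engine rather than application code.

```python
import time
from dataclasses import dataclass

@dataclass
class AccessGrant:
    identity: str       # a person, or a non-human identity such as an AI agent
    resource: str
    actions: frozenset  # explicitly enumerated; nothing is implied
    expires_at: float   # epoch seconds; access is time-bound by construction

    def permits(self, action: str) -> bool:
        return action in self.actions and time.time() < self.expires_at

# Grant a hypothetical AI agent read-only access to one dataset for 15 minutes
grant = AccessGrant(
    identity="agent:summarizer-01",
    resource="dataset:case-notes",
    actions=frozenset({"read"}),
    expires_at=time.time() + 15 * 60,
)
print(grant.permits("read"))   # True until the grant expires
print(grant.permits("write"))  # False: never granted, so never permitted
```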
This is especially important because the threat environment is not static. Panelists described both increased attack volume and emerging concerns tied to AI-enabled systems, including the risk of unauthorized tools operating outside approved governance structures.
Shadow AI is emerging as a real leadership challenge
One of the most timely concepts in the discussion was the rise of shadow AI. Just as organizations once had to contend with shadow IT, they are now facing situations where employees or contractors experiment with AI tools, models, or agents outside formal oversight.
This is not always malicious. In many cases, it reflects initiative and a desire to solve problems faster. But it still creates risk. If a tool is deployed without registration, governance, or clear ownership, leaders may not know what data it can access, what permissions it has, or how its outputs are being used. That increases the attack surface and weakens accountability.
The panel framed this as both a governance and visibility issue. Strong policies are necessary, but so is the ability to detect unauthorized model use and understand how AI systems are interacting with enterprise environments. Without that visibility, organizations may not realize where risk is accumulating until it is too late.
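One hedged illustration of what that visibility could look like in practice: scanning proxy or egress logs for AI service traffic that is not on an approved list. The log format, host names, and patterns below are invented for this sketch and would need to reflect a real environment.

```python
# Hypothetical proxy-log scan; hosts and patterns are illustrative only.
APPROVED_AI_HOSTS = {"ai.approved-vendor.example"}  # sanctioned tools
AI_HOST_HINTS = ("openai", "anthropic", "generativelanguage")

def flag_shadow_ai(log_lines: list[str]) -> list[str]:
    findings = []
    for line in log_lines:
        host = line.split()[-1]  # assumed log layout: host is the last field
        if host in APPROVED_AI_HOSTS:
            continue
        if any(hint in host for hint in AI_HOST_HINTS):
            findings.append(host)  # AI traffic outside approved governance
    return findings

logs = [
    "2026-01-15T10:01Z user42 CONNECT ai.approved-vendor.example",
    "2026-01-15T10:03Z user17 CONNECT api.openai.com",
]
print(flag_shadow_ai(logs))  # ['api.openai.com']
```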
For leaders, this is an important shift. AI adoption is no longer only about enabling new capabilities. It is also about establishing guardrails early enough to prevent uncontrolled sprawl.
Doing more with less may begin with doing less on purpose
A particularly strong leadership theme in the webinar was the recognition that organizations cannot simply pile AI and cybersecurity demands on top of already overloaded teams and expect better results.
Panelists offered a more realistic view. Productivity gains come not only from new tools, but from prioritization, simplification, and creating space for teams to learn. That may mean pausing or reducing certain lower-value activities so staff can absorb new workflows, experiment responsibly, and build capabilities that matter over time.
The discussion also emphasized the importance of explaining the why. When teams understand why priorities are shifting, what success looks like, and what matters most, they are better able to focus on higher-impact work. Without that clarity, organizations risk treating everything as urgent and exhausting staff without improving outcomes.
This part of the conversation was one of the most practical. AI adoption is not only a technology challenge. It is a management challenge. Leaders have to make room for change, not just announce it.
Pilots work best when they are focused, used, and easy to stop
The panel also offered useful perspective on pilots. Short, focused pilots were described as a practical way to test whether a use case actually fits mission needs without locking an agency into years of effort.
Three months was discussed as a reasonable testing window once a system is up and running, but panelists were careful to note that usage matters more than calendar time alone. A pilot that technically runs for three months but never gets real engagement does not provide much insight. Leaders need enough user activity and enough operational exposure to judge whether the tool is working.
The value of pilots, in this framing, is not just validation. It is also discipline. They help agencies avoid force-fitting tools that do not match the mission, and they create room to pivot before too much time or money is committed. In a fast-moving AI environment, that matters. A multiyear evaluation cycle is often too slow for the technology landscape being assessed.
Policies and executive orders still need operational translation
The final theme was the gap between direction and execution. Executive orders and high-level policy guidance are highly influential, particularly in areas like AI, zero trust, and data protection. But the panel was candid about the fact that translating those requirements into operational reality can be difficult.
High-level directives often arrive before detailed implementation guidance. Agencies still have to interpret them, map them to business processes, and convert them into controls, workflows, and accountability mechanisms. That work takes time and often varies by agency depending on mission, maturity, and internal constraints.
From an oversight perspective, the real question is whether agencies are turning policy into measurable outcomes. That includes governance structures, business processes, and controls that can actually be observed and assessed. In that sense, policy matters not only as direction, but as a foundation for practical execution.
What leaders can apply now
Taken together, the discussion points to a grounded set of lessons for leaders navigating AI, security, and digital resilience.
The first is to invest in foundations before scaling ambition. Data quality, classification, provenance, and governance are not secondary issues. They shape whether AI efforts produce useful, trusted results.
The second is to focus on practical use cases. Workflow automation, meeting support, classification assistance, and operational learning may not sound transformative, but they are where many teams are seeing real value today.
The third is to treat access and governance as living disciplines. Least privilege, stronger authentication, oversight of non-human identities, and visibility into shadow AI activity all matter more as AI becomes embedded in enterprise operations.
The fourth is to make room for adoption. Teams need time, clarity, and prioritization if they are going to use these tools well. That may mean doing less for a period of time in order to build sustainable capability.
Finally, leaders should treat pilots as learning mechanisms, not just procurement exercises. Clear criteria, real usage, and the ability to stop or pivot quickly are essential in a fast-changing environment.
Continue the Conversation
Watch the on-demand webinar to hear the full discussion and explore additional Leadership Connect resources on AI adoption, data strategy, and public sector mission delivery. Stay connected to upcoming events as we continue convening leaders across government and industry to share practical lessons on scaling innovation responsibly.
To learn more about Leadership Connect and access additional insights from government and industry leaders, visit our website and explore our products!