File upload

AI in upload journeys

A starting point, not an answer

We are still exploring where AI might help with file upload journeys and where it could just as easily get in the way. This is early thinking, shared openly so others can challenge it, add to it, or take it in a different direction.

Upload as an active step, not a hand off

In one example we’ve looked at, users upload several documents, and the system immediately produces a summary or structured view they can work with.

Users review that output, correct anything that doesn’t look right, and use it to decide how to move forward. In this model, the AI isn’t making decisions behind the scenes; it’s supporting users to make sense of what’s been uploaded before the journey continues.

The AI output is treated as a starting point, not an answer. The summaries and suggested structures are there to work with; users can change them, ignore them, or decide they’re not useful.

That matters because AI will get things wrong. It can misread documents, miss details, or make assumptions that don’t quite hold up.

Handled this way, uploading stops being a one-way hand-off. It becomes a moment where people can pause, make sense of what they’ve added, and decide what to do next before the journey moves on.

Optional feedback during upload

There are also areas where we haven’t seen AI in use yet, but where we can see it might add value, based on what we know about upload journeys: where users often get stuck, and where the risks of using AI feel relatively low.

In some services, users upload evidence that technically meets the rules: the file type is accepted and the upload works, but the file still doesn’t give a human agent what they actually need.

Someone might upload a utility bill when a payslip was asked for, or send an image that’s too blurry to read. These problems often surface only later, leading to delays, follow-up contact, and frustration on both sides.

Handled carefully, AI might help earlier in the journey. Not by making decisions, but by doing some light sense-checking at the point of upload: flagging that a document doesn’t seem to match what was asked for, or that an image looks hard to read. That gives people a chance to fix things straight away, rather than finding out later.
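As a rough illustration of how small such a check could be, here is a minimal sketch of a "looks hard to read" heuristic: scoring image sharpness by the variance of a Laplacian edge filter, where low variance suggests few sharp edges. This is not a DWP implementation; the function names and threshold are hypothetical, and any real check would need testing against real documents.

```python
def laplacian_variance(pixels):
    """Estimate sharpness as the variance of a 4-neighbour Laplacian.

    `pixels` is a 2D list of greyscale values (0-255). Low variance
    suggests few sharp edges, i.e. a potentially blurry image.
    """
    h, w = len(pixels), len(pixels[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian: centre weighted -4, neighbours +1
            lap = (pixels[y - 1][x] + pixels[y + 1][x]
                   + pixels[y][x - 1] + pixels[y][x + 1]
                   - 4 * pixels[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)


def looks_blurry(pixels, threshold=100.0):
    """Advisory only: flags the image to the user, never rejects it.

    The threshold is a placeholder; a real service would tune it
    against the kinds of documents users actually upload.
    """
    return laplacian_variance(pixels) < threshold
```

Crucially, the output of a check like this would only ever prompt a suggestion ("this image may be hard to read, would you like to try again?"), never block the upload or decide the outcome.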

The challenge is making sure this feedback is clearly seen as guidance, not a decision.

If that boundary isn’t obvious, people may assume their application is approved when it isn’t, even though an agent still needs to review the evidence.

We haven’t seen this working in services yet. It’s an area where careful experimentation, clear language, and strong expectation-setting will matter greatly.

When not to use AI in upload journeys

There are also situations where using AI is unlikely to help, or may actively make the experience worse.

If a task is simple and well understood, AI support may introduce unnecessary complexity rather than add value. And where decisions need to be definitive or accountable, such as whether evidence is acceptable, complete, or sufficient to progress a claim, a human must make the final decision and remain responsible for it.

Any use of AI here needs to earn its place. It should help people understand what’s going on, stay in control, and get through the task without adding extra friction.

What we're asking

If you're working with uploads, we'd really like to hear how this lines up with your experience:

  • What feels familiar?
  • What doesn't quite land?
  • What have you learned that might be useful for others?

This is shared to build understanding over time, not to land on fixed answers.

Could we improve this page?

Send questions, comments or suggestions to the DWP Design System team.
