Posted in: Wealth Management Administration
Moving to a new platform is a huge undertaking. Matt Dodds, Director at ITM, discusses how to successfully mitigate the data and technology risks of migrations.
Moving to a new platform is a huge undertaking. Moving significant amounts of data, the core of any financial product, carries high risks – to the integrity of data, for data protection, and for ongoing administration and customer experience. Mitigating this with robust, configurable technology and detailed data analysis removes much of the risk from migrations and the associated change programmes.
Successful migrations involve vast amounts of work yet sit firmly outside ‘business as usual’ activity, requiring specialist resources and technology. Striking the balance between maintaining service levels and giving the migration the resource and attention it requires is difficult. This is why many organisations engage specialist support to avoid the well-documented pitfalls of an unsuccessful migration.
The first phase of any migration is planning. There are usually stakeholders from multiple suppliers involved, which is why the term ‘discovery’ has been widely adopted for the initial phase of a migration. Gathering requirements, identifying dependencies and interactions, and quantifying risks tend to be the most intensive parts of a migration programme. Important relationships are also built during this phase, and often need to last the duration of the programme, so this stage shouldn’t be overlooked.
During discovery our key goal is to establish how we will take data from one architectural layout, transform it and load it onto the new platform. Meeting the requirements of both the end user and the long-term proposition, whilst mitigating as many risks as possible, is difficult. Add to this the fact that the landscape within which the migration is taking place is often still changing at this stage, and it becomes easy to see why migrations often don’t proceed as planned.
During discovery some of the key decisions will include:
These decisions play a key role in how the programme is structured and delivered. Once pragmatic plans are agreed, the heavy lifting of building, mapping and analysing data for migration can begin, which is where configurable, specialist technology and expertise becomes most valuable.
Migrating incomplete or poor-quality data to a new platform can undermine the entire business case. Administration becomes more difficult and customer experience suffers, which in turn leads to extended timescales and increased costs to deliver the stated success criteria. The greater the understanding of data quality before, during and after migration, the greater the chances of success.
Substantial automated analysis of source data provides great insight into where problems may lie – remember that what’s been ‘good enough’ on the current platform may not suffice in your new world. Identifying required data items that are missing, items that cannot be safely migrated in the current state, or that should be cleansed or transformed as part of the programme, will be invaluable to success.
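To make this concrete, here is a minimal sketch of what automated source-data analysis can look like. It assumes source records arrive as simple dictionaries; the field names (`policy_id`, `ni_number` and so on) and the checks themselves are illustrative, not taken from any specific platform.

```python
from datetime import date

# Illustrative required fields for a pensions/wealth record.
REQUIRED_FIELDS = ["policy_id", "surname", "date_of_birth", "ni_number"]

def analyse_record(record):
    """Return a list of data-quality issues found in one source record."""
    issues = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            issues.append(f"missing required field: {field}")
    dob = record.get("date_of_birth")
    if isinstance(dob, date) and dob > date.today():
        issues.append("date_of_birth is in the future")
    return issues

def analyse_source(records):
    """Summarise issue counts across the whole extract, for MI reporting."""
    report = {}
    for record in records:
        for issue in analyse_record(record):
            report[issue] = report.get(issue, 0) + 1
    return report
```

In practice these checks would be far more numerous and configurable, but even a summary of issue counts like this quickly shows where the cleanse effort needs to concentrate.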
Categorise data and assign risk ratings, then create cleanse plans to focus attention on the items that could have the biggest impact. We focus on how data can be cleansed through various processes – leveraging intuitive technology where possible. In our experience, running monthly data analysis with fresh source data and management information reporting allows you to compare results and identify trends or process failures. Remember that business as usual carries on regardless during a migration, so constantly reassessing the current state reduces the risk of surprises. Regular, periodic data analysis enables a robust data-cleanse workstream as well as progress tracking towards the end goal.
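The month-on-month comparison described above can be sketched very simply. This assumes each monthly analysis produces a mapping of issue name to count; everything else here is a hypothetical illustration of trend-spotting, not a prescribed report format.

```python
def compare_reports(previous, current):
    """Compare two monthly issue-count reports ({issue: count}) and
    flag whether each issue is improving, worsening or static."""
    trends = {}
    for issue in set(previous) | set(current):
        delta = current.get(issue, 0) - previous.get(issue, 0)
        if delta > 0:
            trends[issue] = f"worsening (+{delta})"
        elif delta < 0:
            trends[issue] = f"improving ({delta})"
        else:
            trends[issue] = "static"
    return trends
```

A worsening count is often the more valuable signal: it points at a live business-as-usual process that is still creating bad data, rather than a legacy backlog being worked down.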
Mapping the data is an iterative process that should continue throughout the programme. The initial data mapping and coding process is completed to benchmark a first cut of code, which is used to produce a preliminary load file. From there, multiple tests and adjustments are made using reconciliation and load processes. At the outset, look to validate some initial assumptions and provide a view of how the data will look on the target platform. The benefit of using technology built to allow changes to be made at any stage of the process, and to be tested in isolation as well as end to end, is the agility and flexibility it provides.
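The map-then-reconcile loop can be illustrated with a toy example. The mapping rules and field names below (a source `fund_value_gbp` mapped to a target pence field, for instance) are entirely hypothetical; the point is the shape of the process: transform the extract, then reconcile counts and totals between source and load file before anything is loaded.

```python
def map_record(source):
    """Apply illustrative mapping rules to one source record,
    producing the target platform's load-file layout."""
    return {
        "TARGET_POLICY_REF": source["policy_id"].strip().upper(),
        "TARGET_FUND_VALUE_PENCE": round(source["fund_value_gbp"] * 100),
    }

def build_load_file(source_records):
    return [map_record(r) for r in source_records]

def reconcile(source_records, load_file):
    """Check record counts and total fund value agree after mapping."""
    return {
        "record_count": len(source_records) == len(load_file),
        "fund_value_total": round(
            sum(r["fund_value_gbp"] for r in source_records) * 100
        ) == sum(t["TARGET_FUND_VALUE_PENCE"] for t in load_file),
    }
```

Because each mapping rule is a small, isolated function, a rule can be changed and re-tested on its own, then proven again end to end through the reconciliation checks – the agility the paragraph above describes.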
Testing is crucial for all migrations yet is often not as comprehensive as it should be. The number of individual testing phases will depend on the length of the programme and its dependencies. The key is to balance the need for multiple phases against allowing enough time in each for the testing cycle to deliver maximum value. You should expect to implement three types of testing:
During a phased approach that works around the complexities and in-flight projects you may have, each phase of migration can be a lesson – enabling you to refine your plans as you go.
All the preparation completed to date is designed to deliver a frictionless migration. However, testing is only ever a simulation, and bugs and issues can still arise after implementation. The period that follows is focussed on any necessary rectification through the deployment of code changes and standalone data updates. There is then a handover as BAU operations take over from the programme.
It is a complex challenge, and as with most things this complicated, the implications of getting it wrong can be huge. The importance of configurable technology, experienced experts and a focus on the key risks and mitigations, will go a long way to delivering a seamless migration for providers and customers alike.