Periodic Reporting for period 4 - H-Unique (In search of uniqueness - harnessing anatomical hand variation)
Reporting period: 2023-07-01 to 2024-12-31
Hard biometrics, such as fingerprints, are well understood, and some soft biometrics are gaining traction within both the biometric and forensic domains. A combined approach using both soft and hard biometrics has not previously been attempted from images of the hand. We have pioneered the development of new methods to release the full extent of variation locked within the visible anatomy of the human hand and reconstructed its discriminatory profile as a multimodal biometric. A significant step change in the science was required to both reliably and repeatably extract and compare anatomical information from large numbers of images, accommodating variation in position, lighting and resolution.
Large datasets are vital for this work to be legally admissible. Through citizen engagement with science, this research has collected images from over 5,000 participants, creating an active, open-source, ground-truth dataset. Algorithms have been designed to permit automated pattern searching across large numbers of stored images of variable quality, with the aim of providing a major novel breakthrough in the study of anatomical variation, with wide-ranging, interdisciplinary and transdisciplinary impact.
Our key objectives were (i) to establish the variability of the human hand and thereby better understand anatomical variation; (ii) to create new algorithms to both reliably and repeatably extract anatomical features from images; (iii) to determine the extent to which variation in hand position and image quality alters the ability to recognise features of hand anatomy; (iv) to undertake black-box and white-box testing to establish a hierarchy of hand biometrics; and (v) to retro-engineer a multimodal biometric to represent and visualise hand variation, thereby establishing uniqueness.
Through the development of our outreach strategy, we established two large ground-truth image datasets and a 3D dataset, including collection systems, infrastructure and databases. We developed a web-based application for data collection for our Large-Scale Citizen Science Dataset, which contains images of both hands of over 5,000 individuals in various poses to allow examination of our data extraction methodology. This set contains over 50,000 images, making it the largest hand image dataset in the world by a large margin. We have built a multi-camera rig for our High-Quality Dataset, which allows near-simultaneous capture of colour and infrared images, permitting comparison between the modalities, alongside mobile capture. This set has over 15,000 images of both hands of 650 volunteers in various poses. We built a further multi-camera photogrammetry rig for our 3-dimensional (3D) Hand Dataset, along with algorithms to reconstruct high-definition 3D data. This dataset contains detailed photographs and 3D reconstructions of the hands of 50 people.
To facilitate data capture and ensure public awareness of our work, we have developed and implemented plans for continuous engagement with periodic events, including press releases, television and radio interviews, newspaper articles, a blog and demonstrations of our work at expos and conferences, as well as numerous research articles and conference presentations.
A key objective was to develop the ability to extract key features from photographs of the hand, including superficial veins, knuckle and palmar creases, pigmentation, scars and lunules. We have developed new state-of-the-art (SOA) approaches for vein pattern tracing, extraction and mapping in two modalities, as well as crease and pigmentation extraction. We have developed localisation techniques to find and identify the hand, as well as key regions, from any image (regardless of scene, camera and quality), including knuckles and joints, punctate pigmentation, fingernails, and lunules. Our accuracy is world-leading, frequently surpassing the SOA.
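To illustrate the kind of image processing involved in extracting curvilinear structures such as veins and creases, the following is a minimal Hessian-based ridge filter. It is a generic sketch for illustration only, not the project's actual extraction pipeline; the synthetic image and parameters are hypothetical.

```python
# Minimal Hessian-eigenvalue ridge filter for curvilinear structures
# (e.g. veins, creases). Illustrative sketch only; NOT the project's
# actual feature-extraction method.
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_response(image, sigma=1.5):
    """Per-pixel ridge strength from the eigenvalues of the Hessian.

    A dark curvilinear structure on a bright background yields a large
    positive second derivative across the ridge direction.
    """
    # Smoothed second derivatives (Hessian components).
    Ixx = gaussian_filter(image, sigma, order=(0, 2))  # d2/dx2 (cols)
    Iyy = gaussian_filter(image, sigma, order=(2, 0))  # d2/dy2 (rows)
    Ixy = gaussian_filter(image, sigma, order=(1, 1))
    # Larger eigenvalue of the 2x2 Hessian at each pixel.
    half_trace = (Ixx + Iyy) / 2
    diff = np.sqrt(((Ixx - Iyy) ** 2) / 4 + Ixy ** 2)
    lam1 = half_trace + diff
    # Keep only positive curvature (dark ridge on bright background).
    return np.maximum(lam1, 0.0)

# Synthetic example: a dark horizontal line on a bright background.
img = np.ones((32, 32))
img[16, 4:28] = 0.0
resp = ridge_response(img, sigma=1.5)
# The response peaks on the dark line (row 16).
peak_row = np.unravel_index(np.argmax(resp), resp.shape)[0]
```

In practice the filter is run at multiple scales (`sigma` values) so that structures of different widths, from fine creases to broad veins, all respond.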
In addition to building ultra-high-resolution 3D reconstructions from photogrammetry, we have developed new, world-leading techniques for 3D reconstruction of the hand from single images, including simultaneous determination of surface and texture. We have developed reposing methods that simulate changes in hand texture, enabling us to examine the impact of movement on our feature extraction algorithms.
We have developed white-box and black-box feature comparison methodology, including methods for punctate patterns (pigmentation and scars), curvilinear structures (skin creases), and graph structures (vein patterns and pigmentation clusters). Several studies have been conducted to investigate the relative contribution of different anatomical constructs, and we have developed methodology to determine the hierarchy of features and their respective contributions to identification. We have completed work on uncertainty estimation and its impact on identification, which is important for the adoption of this work into practice.
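As a simple illustration of what comparing punctate patterns can involve, the sketch below scores two point sets (e.g. pigmentation marks) with a symmetric mean nearest-neighbour distance. This is a hypothetical baseline for illustration, not the project's comparison methodology, and the coordinates are invented.

```python
# Illustrative punctate-pattern comparison: symmetric mean
# nearest-neighbour distance between two 2D point sets.
# Hypothetical baseline; NOT the project's actual method.
import numpy as np

def point_set_distance(a, b):
    """Mean of the two directed nearest-neighbour distances."""
    # Pairwise Euclidean distances between all points in a and b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

marks_a = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 30.0]])
marks_b = marks_a + 0.5  # same pattern, slightly shifted
marks_c = np.array([[5.0, 50.0], [60.0, 60.0], [30.0, 5.0]])

same_score = point_set_distance(marks_a, marks_b)  # small: same hand
diff_score = point_set_distance(marks_a, marks_c)  # large: different
```

A real comparison must additionally handle missing or spurious detections and estimate the uncertainty of each score, which is why the project's black-box and white-box methodology goes well beyond a distance like this.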
We have published 12 papers in world-leading journals and conference proceedings, including Computer Vision and Pattern Recognition (CVPR), the second-ranked publication venue in the world according to Google Scholar Metrics, as well as the European Conference on Computer Vision (ECCV), Pattern Recognition, the International Joint Conference on Biometrics (IJCB) and Sensors. We have presented live demonstrations of the work at major venues, including CVPR, ECCV, and the European Association for Biometrics.
Our vein segmentation and map extraction algorithms improve considerably over the SOA: we have developed the first method for vein map extraction, and the first method for automatically detecting and labelling knuckle regions and fingernails, allowing the hand to be mapped for analysis. Our weakly-supervised segmentation work surpasses the SOA by a large margin across multiple tasks, which is particularly important for the extraction of tattoos, lunules, jewellery and scars.
Our feature comparison work has broken new ground and improved over the SOA. We have developed methods for graph matching and object comparison, both of which achieve superior results to the SOA. We have developed holistic knuckle crease comparison and world-leading work in vision-language models, as well as the first anatomical feature hierarchy for visible features on the hand, allowing us to understand, for the first time, the contribution of each anatomical construct to identification.
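One way a feature hierarchy can feed identification is through weighted score fusion, where each anatomical construct contributes a similarity score weighted by its discriminative power. The sketch below is purely illustrative: the feature names, scores and weights are hypothetical, and the project's hierarchy is determined empirically rather than fixed like this.

```python
# Minimal weighted score-fusion sketch: combining per-feature
# similarity scores into a single identification score.
# All names, scores and weights are hypothetical examples.
def fuse_scores(scores, weights):
    """Weighted mean of per-feature similarity scores in [0, 1]."""
    total = sum(weights.values())
    return sum(scores[k] * w for k, w in weights.items()) / total

# Hypothetical similarity scores for one candidate comparison.
scores = {"veins": 0.92, "knuckle_creases": 0.85, "pigmentation": 0.70}
# Hypothetical weights reflecting a feature hierarchy
# (more discriminative features weigh more).
weights = {"veins": 0.5, "knuckle_creases": 0.3, "pigmentation": 0.2}

fused = fuse_scores(scores, weights)
# 0.5*0.92 + 0.3*0.85 + 0.2*0.70 = 0.855
```

In a deployed system the fused score would also carry an uncertainty estimate, in line with the project's work on uncertainty and its impact on identification.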