University of Milano-Bicocca Showcase
Browse datasets by department
Find research data from University of Milano-Bicocca
116 results
- Alessia Rota, PhD Thesis, Supplementary Files. PhD thesis supplementary files for Chapter 3 (Version 1). Supplementary File 1: metadata of all the samples collected, including day and time of sampling, day/night distribution, season, geographic coordinates, and distribution among macro-zones and micro-zones. Supplementary File 2: list of annotated bony fish and elasmobranch taxa detected in the dataset. Supplementary File 3: list of annotated invertebrate taxa for the nine selected phyla, namely Annelida, Arthropoda, Bryozoa, Cnidaria, Ctenophora, Echinodermata, Mollusca, Chlorophyta, and Haptophyta.
- Parmegiani Andrea, PhD Thesis, Supporting Video. Supporting video for the PhD thesis entitled "Analysis of shark aggregations and ecology in the Maldives. Assessing a protocol for the survey of the species by the use of non-invasive methods". From Parmegiani Andrea, PhD student, XXXVIII cycle in Terrestrial and Marine Environmental Sciences, University of Milano-Bicocca, 2025.
- The effect of cognitive load on information retention in working memory: Are item order and serial position different processes? Raw data of the study.
- An auditory-mediated communication paradigm for evaluating individual needs and motivational states in locked-in patients (Restricted Access). The stimulus set was used in the ERP paper "Decoding Motivational States and Craving through Electrical Markers for Neural 'Mind Reading'" by Proverbio AM & Zanetti A (2025). The aim of the study was to identify electrical neuromarkers of 12 different motivational and physiological states (such as visceral cravings, affective and somatosensory states, and secondary needs) in LIS, coma, or minimally conscious state patients. Auditory stimuli were designed by combining a human expressive voice with a background sound to evoke a context related to the targeted needs. The stimuli covered: primary or visceral needs (hunger, thirst, and sleep), homeostatic or somatosensory sensations (cold, heat, and pain), emotional or affective states (sadness, joy, and fear), and secondary needs (desire for music, movement, and play). 17 audio clips were recorded for each micro-category, each replicated twice, once with a male voice and once with a female voice, totaling 408 stimuli (a counting sketch is given after this list). Audacity software was used to combine the vocal track with a background context coherent with the verbal content. Human voices were recorded using a 202 K38 microphone by Hompower (SNR = 80 dB). The semantic content, prosodic intonation, and emotional tone of all voices were coherent and appropriately matched. Some of the background sounds were recorded with the same microphone, while others were sourced for scientific purposes from the publicly accessible BBC Sound Effects library (https://sound-effects.bbcrewind.co.uk/search). Research was funded by ATE – Fondo di Ateneo No. 31159-2019-ATE-0064, University of Milano-Bicocca. The research project, entitled "Auditory imagery in BCI mental reconstruction", was preapproved by the Research Assessment Committee of the Department of Psychology (CRIP) for minimal-risk projects, under the aegis of the Ethical Committee of the University of Milano-Bicocca, on February 9th, 2024 (protocol no. RM-2024-775).
- Knockin’ on H(e)aven’s door. Financial crises and offshore wealth. Reproduction package for "Knockin’ on H(e)aven’s door. Financial crises and offshore wealth" by Silvia Marchesi and Giovanna Marcolongo. For further information see the README file.
- IN2SIGHT_2023_Data_Microlens-testing. This dataset contains the data collected under the IN2SIGHT EU project (GA 964481) for the testing of the different types of microlenses used in the project. These data refer to the year 2023. The microlenses were tested with a USAF1951 target to evaluate the OSF and the MTF, and were used in imaging when implanted or on cells. A specific set of data concerns the use of agarose phantoms containing fluorescent microbeads to evaluate the PSF in 3D. Each folder contains README.txt files and a "summary.pptx" graphical summary (a small traversal sketch is given after this list). The full folder tree is listed in the file Implants_MetaFile_2023.xlsx. The file "Metafile_Ulenses_2023.xlsx" describes the folder tree with comments; its README tab reports the types of microlenses investigated in vitro and in vivo. Details on the work done can be found in the deliverables of WP2 and WP4 of the project, available on the EU website https://cordis.europa.eu/project/id/964481/reporting.
- The normal Lymph Node. Supplemental Data. Normal human lymph nodes and tonsils, multiplexed with the MILAN technique and analyzed with BRAQUE.
- AttiFood Database. AttiFood Picture Database.
- Raw Mapping data. Raw data acquired with a Bruker IRIS hyperspectral scanner: spectral reflectance data between 400 and 2500 nm, and X-ray fluorescence data.
- SegFVG: A High-Resolution Large-Scale Dataset for Building Segmentation from Aerial Imagery in Northeastern Italy. Accurate building extraction from high-resolution aerial imagery is essential for numerous applications in remote sensing, urban planning, and disaster management. While AI-based methods enable fast, scalable, and cost-effective segmentation of building footprints, their development is often limited by the scarce availability of large-scale, geographically diverse datasets with reliable pixel-level annotations. In this work, we present SegFVG, a large-scale, high-resolution, and geographically diverse dataset for building segmentation focused on the Friuli Venezia Giulia region in northeastern Italy. The dataset includes over 15,000 true-orthophoto aerial image tiles, each 2000 × 2000 pixels with a ground sampling distance of 0.1 meters, paired with precise pixel-level building segmentation masks (a coverage check is sketched after this list). Covering approximately 616 square kilometers, SegFVG captures a broad spectrum of urban, suburban, and rural settings across varied landscapes, including mountainous, flat, and coastal areas. Alongside the dataset, we provide benchmark results obtained with several deep learning models; these support the usability of SegFVG for developing accurate segmentation models and serve as a baseline to accelerate future research in building segmentation.
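As a minimal sketch of the stimulus bookkeeping described in the auditory-communication entry above, the snippet below enumerates the 12 micro-categories (grouped into the four macro-classes named in the abstract), 17 clips per micro-category, and two voice genders, and checks that the total comes to 408. The category labels and the data structure are illustrative assumptions drawn from the abstract, not from the actual stimulus files.

```python
# Illustrative count of the stimulus set described in the abstract:
# 12 micro-categories x 17 clips x 2 voices (male/female) = 408 stimuli.
# Category names follow the abstract; the enumeration itself is an assumption.
CATEGORIES = {
    "primary/visceral needs": ["hunger", "thirst", "sleep"],
    "homeostatic/somatosensory": ["cold", "heat", "pain"],
    "emotional/affective states": ["sadness", "joy", "fear"],
    "secondary needs": ["music", "movement", "play"],
}
CLIPS_PER_CATEGORY = 17
VOICES = ["male", "female"]

stimuli = [
    (macro, micro, clip, voice)
    for macro, micros in CATEGORIES.items()
    for micro in micros
    for clip in range(1, CLIPS_PER_CATEGORY + 1)
    for voice in VOICES
]

assert len(stimuli) == 12 * CLIPS_PER_CATEGORY * len(VOICES) == 408
print(f"{len(stimuli)} stimuli across {sum(map(len, CATEGORIES.values()))} micro-categories")
```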
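For the IN2SIGHT microlens entry, the abstract states that each data folder contains README.txt files and a "summary.pptx" graphical summary. The sketch below simply walks a local copy of the dataset and reports, per folder, whether those two files are present; the root path is hypothetical, and only the filenames named in the abstract are assumed.

```python
# Minimal sketch: walk a (hypothetical) local copy of the dataset and report,
# for each folder, whether the README.txt and summary.pptx described in the
# abstract are present.
from pathlib import Path

DATASET_ROOT = Path("IN2SIGHT_2023_Data_Microlens-testing")  # hypothetical local path

for folder in sorted(p for p in DATASET_ROOT.rglob("*") if p.is_dir()):
    has_readme = (folder / "README.txt").exists()
    has_summary = (folder / "summary.pptx").exists()
    print(f"{folder}: README.txt={has_readme}, summary.pptx={has_summary}")
```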
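Finally, the tile geometry reported for SegFVG can be sanity-checked with simple arithmetic: a 2000 × 2000 pixel tile at a 0.1 m ground sampling distance spans 200 m × 200 m, i.e. 0.04 km², so roughly 15,000 tiles cover about 600 km², consistent with the ~616 km² stated in the abstract. The assumption of non-overlapping tiles is ours, not the authors'.

```python
# Back-of-the-envelope coverage check for the SegFVG figures in the abstract.
# Assumes non-overlapping tiles, which the abstract does not state explicitly.
tile_px = 2000        # tile side, pixels
gsd_m = 0.1           # ground sampling distance, metres per pixel
n_tiles = 15_000      # "over 15,000" tiles

tile_side_m = tile_px * gsd_m              # 200 m per side
tile_area_km2 = (tile_side_m / 1000) ** 2  # 0.04 km^2 per tile
total_km2 = n_tiles * tile_area_km2        # ~600 km^2

print(f"tile: {tile_side_m:.0f} m x {tile_side_m:.0f} m = {tile_area_km2} km^2")
print(f"approximate coverage of {n_tiles} tiles: {total_km2:.0f} km^2 (abstract: ~616 km^2)")
```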

