Would it be possible to reconsider the dataset format, from Parquet to COG?
Dear @rbaveryw, the dataset was meant to be used for training purposes. Let me discuss with the team what could be done to improve its usability.
@rbaveryw Hi Ryan, I'm just an interested person working a bit with LLMs and models. I would love to join you on your endeavour, to lend a hand if needed or to learn about this. I am very interested in applying some ideas to the dataset for agriculture contexts. Thanks in advance, have a good day!
@jmarintur Hi Javier, I'm very interested in learning and being part of the team if the opportunity exists. I'm a (full stack) software engineer working in the TypeScript stack (I can use Python as well).
@jmarintur
I just noticed that no patch in a parquet file, e.g., train-00000-of-03676.parquet, has unique bounds, so it will be impossible to reconstruct the data in a geospatial format (e.g., Parquet -> COG). Is this the issue with the bounds you mentioned in the README, or was it intentional?
Thanks!
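For reference, here is roughly the check I ran, as a minimal sketch assuming pyarrow is installed; it only inspects one shard's schema, so nothing beyond the shard name is assumed:

```python
import pyarrow.parquet as pq

# Inspect one shard's schema without loading any image bytes.
schema = pq.ParquetFile("train-00000-of-03676.parquet").schema_arrow

# If the schema carried bounds/CRS/geotransform fields, each patch could be
# georeferenced back into a COG; as far as I can tell, it does not.
print(schema)
```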
Hi @csaybar, this is indeed the issue we mention in the README. Going from Parquet to COG is not recommended due to the huge number of samples we have. In a few days, we will provide the correct bounds for each of the images so anyone can create a GeoTIFF file. We are also currently working on hosting the data elsewhere in STAC format. Thank you for your interest!
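Once the bounds are published, writing a single patch out as a GeoTIFF should look roughly like this sketch, assuming rasterio and numpy; the band count, patch size, CRS, and bounds values below are placeholders, not the dataset's actual values:

```python
import numpy as np
import rasterio
from rasterio.transform import from_bounds

patch = np.zeros((4, 384, 384), dtype=np.uint8)            # placeholder (bands, H, W)
west, south, east, north = -58.50, -34.70, -58.49, -34.69  # placeholder bounds

# Build the affine geotransform from the patch's bounds and pixel grid.
transform = from_bounds(west, south, east, north, patch.shape[2], patch.shape[1])

with rasterio.open(
    "patch.tif", "w",
    driver="GTiff",  # GDAL >= 3.1 also offers a dedicated "COG" driver
    height=patch.shape[1], width=patch.shape[2],
    count=patch.shape[0], dtype=patch.dtype.name,
    crs="EPSG:4326", transform=transform,
) as dst:
    dst.write(patch)
```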
Hi @jmarintur, are there any updates on the unique geo-metadata for each image?
Hi @jmarintur, this is the original author of the question (with the correct HF account). Any updates on a STAC catalog? Alternatively, I think Zarr would be a better format for high-performance dataloading that still maintains array metadata to enable easy indexing, slicing, and plotting. Zarr would be less interoperable with GIS viewers, but it would better support those who want to fine-tune on Satellogic. https://guide.cloudnativegeo.org/zarr/zarr-in-practice.html
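To make the Zarr suggestion concrete, here is a minimal sketch using zarr-python; the shapes, chunking, and attribute names are illustrative, not a proposal for the dataset's actual schema:

```python
import numpy as np
import zarr

# One chunked array for all patches; chunks of 32 patches keep dataloader
# reads small while still allowing fast random access by index.
root = zarr.open_group("satellogic_patches.zarr", mode="w")
images = root.create_dataset(
    "images", shape=(1000, 4, 384, 384), chunks=(32, 4, 384, 384), dtype="uint8"
)
images[:32] = np.random.randint(0, 256, size=(32, 4, 384, 384), dtype=np.uint8)

# Geo-metadata travels with the array, so slices stay plottable/indexable.
images.attrs["crs"] = "EPSG:4326"
images.attrs["bounds_example"] = [-58.50, -34.70, -58.49, -34.69]

# Easy indexing/slicing: grab band 0 of the first eight patches.
thumb = images[:8, 0]
print(thumb.shape)  # (8, 384, 384)
```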