Mesh HKS Demo
Try out the heat kernel signature (HKS) pipeline on your data! Either input a cloud path to a mesh source or drag-and-drop a mesh file onto the viewer.
Clicking Compute HKS will run the condensed_hks_pipeline from meshmash on your mesh and color each vertex by its heat kernel signature value or by predictions from the HKS-based classifier.
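For intuition about what is being computed, the heat kernel signature at a vertex has a simple closed form in terms of Laplacian eigenpairs. The sketch below is not the meshmash implementation — it evaluates the textbook HKS formula on a toy graph Laplacian with SciPy (the real pipeline uses a geometric mesh Laplacian plus the condensation and classification steps):

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse import csgraph

def heat_kernel_signature(adjacency, times):
    # HKS(v, t) = sum_i exp(-lambda_i * t) * phi_i(v)**2, where
    # (lambda_i, phi_i) are eigenpairs of a Laplacian on the mesh.
    L = csgraph.laplacian(adjacency, normed=True)
    evals, evecs = eigh(L)
    # Rows: vertices; columns: diffusion times.
    return (evecs**2) @ np.exp(-np.outer(evals, times))

# Toy "mesh": the connectivity graph of a tetrahedron.
adjacency = np.ones((4, 4)) - np.eye(4)
hks = heat_kernel_signature(adjacency, times=np.array([0.1, 1.0, 10.0]))
print(hks.shape)  # (4, 3): one signature value per vertex per time
```

Small times emphasize local geometry and large times emphasize global shape, which is why the signature is useful as a per-vertex feature for classification.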
Enter a cloud path and segment ID, then click Compute HKS
Cloud path — a publicly accessible Neuroglancer-compatible source URL (e.g. precomputed://gs://bucket/path).
Segment ID — the integer ID of the neuron segment to fetch and visualize.
Position (optional) — an x, y, z coordinate (in mesh units, usually nanometers) used to select the mesh chunk nearest that location. If blank, a random chunk is used.
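As a hedged illustration of the position option (the demo's actual chunk-selection logic may differ), a nearest-vertex lookup with a random fallback could be sketched like this; `pick_seed_vertex` is a hypothetical helper, not part of meshmash:

```python
import numpy as np
from scipy.spatial import cKDTree

def pick_seed_vertex(vertices, position=None, seed=None):
    """Return the index of the vertex nearest `position` (in mesh units,
    usually nm), or a random vertex index when no position is given."""
    if position is None:
        rng = np.random.default_rng(seed)
        return int(rng.integers(len(vertices)))
    _, index = cKDTree(vertices).query(position)
    return int(index)

verts = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0]], dtype=float)
print(pick_seed_vertex(verts, position=[90, 5, 0]))  # prints 1
```

Rerunning without a position samples a new random vertex, mirroring the "random chunk" behavior described above.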
Considerations
- If run on a large object, this demo operates on a small fragment of that object to keep I/O and compute time reasonable. In Neuroglancer link mode, the pipeline uses the cursor position to select the nearest point on the object; if no cursor position is provided, it uses a random point (rerun to sample a new one). The underlying pipeline can be run on larger objects via the Python code.
- Many pipelines generate mesh fragments, and how these fragments are stitched together is important for producing a (roughly) topologically correct representation of the object's surface.
- The pipeline has parameters (described in the paper) that were optimized for postsynaptic target classification in MICrONS; for other problems, you may want to experiment with these parameters.
- If uploading your own mesh, it should have units of nanometers. Meshes in .ply, .obj, .stl, .gltf, or .glb format should work. Drag-and-drop onto the viewer or use the Upload mesh button.
- Having issues? Open an issue or get in touch at ben.pedigo@alleninstitute.org.