Replaying edits
Once we have extracted the edits that have happened to a neuron (see the previous example), it can be helpful to replay them in order to see how the neuron has changed over time.
Extract the edits and initial state of this neuron
import networkx as nx
from tqdm.auto import tqdm
from caveclient import CAVEclient
from paleo import get_initial_graph, get_root_level2_edits
As in the previous example, we'll start by extracting the edits to a neuron.
root_id = 864691135639556411
client = CAVEclient("minnie65_public", version=1078)
networkdeltas = get_root_level2_edits(root_id, client)
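Before going further, it can help to peek at what we got back. The snippet below is just a quick inspection; it only assumes that networkdeltas is a dictionary keyed by operation ID, which is how we iterate over it later.
# networkdeltas maps each chunkedgraph operation ID to an object describing the
# level2 nodes and edges that the corresponding edit added and removed
print(f"Number of edits: {len(networkdeltas)}")
first_operation_id = next(iter(networkdeltas))
print(networkdeltas[first_operation_id])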
This time, we'll also use paleo.get_initial_graph to get the level2 graph connectivity for all objects that participate in this neuron's edit history. This will allow us to replay the edits in the context of the full segmentation graph.
initial_graph = get_initial_graph(root_id, client)
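As a quick look at what we're starting from (a small inspection sketch; it only assumes that initial_graph is a standard networkx graph, which is how we use it below), we can check its size and see that it is typically split into many connected components before any edits are replayed:
# the initial graph covers every object involved in this neuron's edit history,
# so it usually starts out as many separate pieces
print(initial_graph.number_of_nodes(), initial_graph.number_of_edges())
print(nx.number_connected_components(initial_graph))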
Replaying the edits over the level2 graph
The simplest thing we can do now is to replay the edits in order. paleo provides the apply_edit function, which takes in the graph and an edit and applies the edit to the graph. Note that this modifies the graph in place.
from paleo import apply_edit
deltas = list(networkdeltas.values())
graph = initial_graph.copy()
for delta in tqdm(deltas, disable=False):
    apply_edit(graph, delta)
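As a quick, purely illustrative check that the replay changed anything at all, we can compare edge counts before and after:
# the replayed graph should differ from the untouched initial graph
print("initial edges:", initial_graph.number_of_edges())
print("after replay:", graph.number_of_edges())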
As a sanity check, we might want to compare the graph we got by replaying the edits against the actual graph we'd get from client.chunkedgraph.level2_chunk_graph. To do so, we also need to know a point on the object of interest to use as an anchor point - this is because graph will typically be composed of many connected components, and only one of them corresponds to the current state of our neuron.
from paleo import get_nucleus_supervoxel
nuc_supervoxel_id = get_nucleus_supervoxel(root_id, client)
nuc_level2_id = client.chunkedgraph.get_roots(nuc_supervoxel_id, stop_layer=2)[0]
neuron_component = nx.node_connected_component(graph, nuc_level2_id)
neuron_graph = graph.subgraph(neuron_component)
computed_edgelist = nx.to_pandas_edgelist(neuron_graph).values.astype(int)
final_edgelist = client.chunkedgraph.level2_chunk_graph(root_id)
It's reassuring to see that we at least have the same number of edges in both cases.
len(final_edgelist), len(computed_edgelist)
...and when we sort each edgelist into a canonical order and compare them element-wise, we see that they are the same.
import numpy as np
final_edgelist = np.unique(np.sort(final_edgelist, axis=1), axis=0)
computed_edgelist = np.unique(np.sort(computed_edgelist, axis=1), axis=0)
(final_edgelist == computed_edgelist).all()
Tracking neuron state over the edit history
Now, let's try keeping track of the state of the neuron at every point along this edit history.
This becomes just a bit more complicated: the level2 ID corresponding to the nucleus location may change over time if there was an edit near that location. If we want to keep track of the segmentation component corresponding to the nucleus (or some other point) over this whole history, then we need to know how this ID changes over time. paleo provides the get_node_aliases function to help with this.
from paleo import get_node_aliases
node_info = get_node_aliases(nuc_supervoxel_id, client, stop_layer=2)
node_info
Now we have all the ingredients to replay the edits and keep track of the neuron's state.
def find_level2_node(graph, level2_ids):
    # return the first of the candidate level2 IDs that is present in the graph
    for level2_id in level2_ids:
        if graph.has_node(level2_id):
            return level2_id
    return None
# keep track of components that are reached as we go
components = []
# start from a fresh copy of the initial graph, so the stored states are not
# affected by the edits we already replayed above
graph = initial_graph.copy()
# store the initial state
nucleus_node_id = find_level2_node(graph, node_info.index)
component = nx.node_connected_component(graph, nucleus_node_id)
components.append(component)
# after each edit, apply it and store the connected component for the nucleus node
for delta in tqdm(deltas, disable=False):
    apply_edit(graph, delta)
    nucleus_node_id = find_level2_node(graph, node_info.index)
    component = nx.node_connected_component(graph, nucleus_node_id)
    components.append(component)
from paleo import get_component_masks
l2_masks = get_component_masks(components)
l2_masks
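One simple use of these masks - sketched here under the assumption that get_component_masks returns a pandas DataFrame of 0/1 indicators with one row per level2 node and one column per state - is to count how many level2 nodes belong to the neuron at each point in its history:
# assuming rows are level2 IDs and columns are states, the column sums give the
# number of level2 nodes in the neuron at each state
node_counts_by_state = l2_masks.sum(axis=0)
print(node_counts_by_state)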
Simplifying the process
The resolve_edit function simplifies some of this boilerplate code by taking in the graph, the edit, and a list of nodes to check in order to "anchor" the edit - in our case, the level2 IDs corresponding to the nucleus point. It also simplifies the code if we add an element to our networkdeltas dictionary mapping -1 to None, which denotes the original state of the neuron before any edits were applied.
from paleo import resolve_edit
# keep track of components that are reached as we go
components = []
# start again from a fresh copy of the initial graph
graph = initial_graph.copy()
# remember to include the initial state
networkdeltas = {-1: None, **networkdeltas}
# after each edit, apply it and store the connected component for the nucleus node
for edit_id, delta in tqdm(networkdeltas.items(), disable=False):
    component = resolve_edit(graph, delta, node_info.index)
    components.append(component)
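At this point we should have one stored component per entry in networkdeltas, including the extra -1 entry for the initial state:
# one component per state: the initial state plus one per edit
print(len(components), len(networkdeltas))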
The above syntax is helpful if you want some control over what happens at each stage of the process, or if you want to keep track of particular information at each stage. If you just want the level2 nodes or the level2 graph at each stage, you can use the apply_edit_sequence function, which is a wrapper around this resolve_edit loop.
This method returns a dictionary mapping the edit ID to the state of the neuron after applying that edit. By default, this function will include the level2 nodes at each state of the neuron's history.
from paleo import apply_edit_sequence
# pass a fresh copy of the initial graph so every state is replayed from scratch
nodes_by_state = apply_edit_sequence(initial_graph.copy(), networkdeltas, node_info.index)
len(nodes_by_state[9028])
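Since the keys are edit IDs (plus -1 for the initial state) and the values are the level2 nodes at each state, it's straightforward to summarize how the neuron changes over its history - a small sketch:
# number of level2 nodes in the neuron at each state, in edit order
state_sizes = {edit_id: len(nodes) for edit_id, nodes in nodes_by_state.items()}
print(list(state_sizes.items())[:5])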
If you need to keep the actual connectivity of the level2 graph at each stage, then instead pass return_graphs=True. This will return a dictionary mapping the edit ID to the level2 graph at that stage. This version is a bit slower since it makes a copy of the graph at each edit.
from paleo import apply_edit_sequence
# again, start from the initial graph so every state is replayed from scratch
graphs_by_state = apply_edit_sequence(
    initial_graph.copy(), networkdeltas, node_info.index, return_graphs=True
)
graphs_by_state[9028]
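Each value here should be a level2 graph snapshot (assuming the returned graphs are networkx graphs like the one we've been working with), so we can, for example, look at node and edge counts across the first few states:
# node and edge counts for the first few states of the neuron's history
for edit_id, state_graph in list(graphs_by_state.items())[:5]:
    print(edit_id, state_graph.number_of_nodes(), state_graph.number_of_edges())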