Wednesday, July 20, 2016

An example of modelling of overlapping chromosomes


Example of generated overlapping chromosomes. Left: greyscale image (DAPI + telomeres). Right: the red label maps to pixels belonging to the overlapping domain; blue and yellow map to pixels belonging to the single chromosomes.

This image is used as an illustration on Kaggle.

Tuesday, June 21, 2016

Generating images of overlapping chromosomes

Generating examples of two overlapping chromosomes from single chromosomes:

Images of single chromosomes hybridized with a telomeric Cy3 probe and counterstained with DAPI were extracted from the image of a metaphase:

DAPI-stained chromosomes (left), (CCCTAA)3-PNA-Cy3 probes (red)
The monochrome DAPI and Cy3 images were added. Single chromosomes were isolated:
All images are 131x128
Given two single chromosomes, examples of overlapping chromosomes were generated by combining rotations and translations:


Each greyscale image of a pair of overlapping chromosomes is associated with a mask containing labels. Pixels with a value of 3 map to the overlapping domain of the two chromosomes:
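For readers who want to reproduce this kind of generation, here is a minimal sketch (not the author's actual code), assuming numpy/scipy, a zero background, and a label encoding consistent with the overlap value 3 mentioned above (1 and 2 for the single chromosomes, so that overlapping pixels get 1 + 2 = 3):

import numpy as np
from scipy import ndimage

def make_overlapping_pair(chrom_a, chrom_b, angle, shift):
    # chrom_a and chrom_b are 2D greyscale arrays (e.g. 131x128) with zero background;
    # angle is in degrees, shift in pixels (dy, dx)
    rotated = ndimage.rotate(chrom_b, angle, reshape=False)  # rotate the second chromosome
    moved = ndimage.shift(rotated, shift)                    # then translate it
    grey = chrom_a + moved                                   # combined greyscale image
    mask_a = (chrom_a > 0).astype(np.uint8)                  # label 1: first chromosome
    mask_b = 2 * (moved > 0).astype(np.uint8)                # label 2: second chromosome
    labels = mask_a + mask_b                                 # overlapping pixels: 1 + 2 = 3
    return grey, labels

In practice a threshold slightly above zero may be needed, since the spline interpolation used by ndimage.rotate() can create small non-zero values around the chromosome.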



Then a dataset containing 2853 examples of overlapping chromosomes was generated and saved as an HDF5 file available here:


Once downloaded and uncompressed, the examples can be loaded from the HDF5 file as shown below:
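The original snippet is embedded in the post; a minimal sketch with h5py, in which the file name and dataset layout are assumptions to be checked against the downloaded archive, could look like this:

import h5py

# file and dataset names are assumptions; inspect f.keys() to find the real ones
with h5py.File('overlapping_chromosomes_examples.h5', 'r') as f:
    print(list(f.keys()))
    data = f[list(f.keys())[0]][:]   # load the first dataset as a numpy array

# each example should contain a greyscale image and its label mask
print(data.shape)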

Friday, June 17, 2016

Modelling the intersection between polygons with GeoGebra.

Rotate the polygons using the points B and I. Move the yellow polygon with the sliders v and w.

Sunday, March 13, 2016

Using TensorFlow 0.7 from Python on an Optimus Linux laptop

Checking the NVIDIA driver installation:

First, check that the GPU is present:


$ lspci -k | grep -A 2 -E "(VGA|3D)"
00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06)
        Subsystem: CLEVO/KAPOK Computer Device 5000
        Kernel driver in use: i915
--
01:00.0 3D controller: NVIDIA Corporation GK208M [GeForce GT 740M] (rev ff)
        Kernel modules: nouveau, nvidia
03:00.0 Network controller: Intel Corporation Wireless 7260 (rev 73)

Check whether the GPU can be harnessed

For example with glxgears. Without the GPU, glxgears yields:

$ glxgears
Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
453 frames in 5.0 seconds = 90.581 FPS
302 frames in 5.0 seconds = 60.288 FPS
XIO:  fatal IO error 11 (Resource temporarily unavailable) on X server ":0"
      after 3146 requests (3146 known processed) with 0 events remaining.



$ optirun glxgears
9891 frames in 5.0 seconds = 1977.995 FPS
10186 frames in 5.0 seconds = 2037.131 FPS
9953 frames in 5.0 seconds = 1990.527 FPS
9908 frames in 5.0 seconds = 1981.540 FPS
10129 frames in 5.0 seconds = 2025.626 FPS

glxgears is accelerated by a factor of at least 20.

Installing CUDA 7.5 and cuDNN

CUDA 7.5 was installed from the Manjaro repository using the Octopi GUI. cuDNN was also installed with Octopi; this library has to be downloaded first from the NVIDIA developer website.

In Manjaro Linux, CUDA is installed in /opt/cuda/. To check the CUDA installation, the sample files were built under the user directory as explained in the NVIDIA documentation.

The binary file deviceQuery can be launched using optirun:

$ optirun ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GT 740M"
  CUDA Driver Version / Runtime Version          7.5 / 7.5
  CUDA Capability Major/Minor version number:    3.5
  Total amount of global memory:                 2048 MBytes (2147352576 bytes)
  ( 2) Multiprocessors, (192) CUDA Cores/MP:     384 CUDA Cores
  GPU Max Clock rate:                            1032 MHz (1.03 GHz)
  Memory Clock rate:                             900 Mhz
  Memory Bus Width:                              64-bit
  L2 Cache Size:                                 524288 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.5, CUDA Runtime Version = 7.5, NumDevs = 1, Device0 = GeForce GT 740M
Result = PASS

Running TensorFlow 0.7 from Python

TensorFlow (the version with GPU acceleration) was installed in a virtual environment according to the documentation. The latest version can then be installed using:

pip install --upgrade tensorflow

According to the documentation, the TensorFlow installation can be checked as follows:
$ python
...
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))
Hello, TensorFlow!
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> print(sess.run(a + b))
42
>>>


However, to run properly, the Python interpreter must be launched with optirun:

$ optirun python
Python 2.7.11 (default, Mar  3 2016, 11:00:04)
[GCC 5.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:900] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: GeForce GT 740M
major: 3 minor: 5 memoryClockRate (GHz) 1.0325
pciBusID 0000:01:00.0
Total memory: 2.00GiB
Free memory: 1.97GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:717] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GT 740M, pci bus id: 0000:01:00.0)
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 1.0KiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 2.0KiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 4.0KiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 8.0KiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 16.0KiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 32.0KiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 64.0KiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 128.0KiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 256.0KiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 512.0KiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 1.00MiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 2.00MiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 4.00MiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 8.00MiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 16.00MiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 32.00MiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 64.00MiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 128.00MiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 256.00MiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 512.00MiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 1.00GiB
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 2.00GiB
>>> print sess.run(hello)
Hello, TensorFlow!
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> print sess.run(a+b)
42
>>> 
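As an extra check (not in the original post), the placement of operations can be logged to confirm that they actually run on /gpu:0, using the log_device_placement option described in the TensorFlow documentation. A small sketch:

$ optirun python
>>> import tensorflow as tf
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
>>> a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
>>> b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
>>> print(sess.run(tf.matmul(a, b)))

The device assigned to each operation is then printed to the console when the session runs.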


Conclusion

At first sight, the issue of using CUDA 7.5 with TensorFlow is resolved with TensorFlow 0.7.

Wednesday, February 10, 2016

Correction of uneven illumination of images of cattle nuclei at low magnification


Building OpenCV 3.1 under Manjaro Linux

As previously, OpenCV was built using cmake-gui to set up the build options. CUDA 7.5 was installed from the Manjaro repository with the Octopi package manager.

Everything went fine during compilation with make -j4 followed by sudo make install.

However, when importing OpenCV from a Python console, things went wrong:

$ python2
Python 2.7.11 (default, Dec  6 2015, 15:43:46)
[GCC 5.2.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named cv2


I didn't meet this problem under Ubuntu, but the problem is known. Specifying the path to cv2 from Python allows the library to be loaded:
 
>>> import sys
>>> sys.path.append('/usr/local/lib/python2.7/site-packages')
>>> import cv2
>>> cv2.__version__
'3.1.0'


Thursday, November 5, 2015

Iteratively pruning the leaves of a toy graph with graph-tool

A measure of the similarity of two weighted multigraphs?

To compare shapes, typically obtained after image segmentation, it has been shown that their skeletons can be used. A search with the keywords skeleton+pruning+image+recognition yields a lot of results.
Assuming that the sizes of the skeleton branches linking the end-points and the branch points together can be represented by the weights of the edges of a graph, a weighted graph can naturally represent the skeleton of a shape.
From the skeleton of the B character (left) to its associated (networkx) graph (right).
Previously, pruning of skeletons was performed by removing pixels, one after another, from the end-points by morphological hit-or-miss. A response curve of a skeleton to pruning can be defined by counting the end-points of the skeleton as pruning is iterated. It was observed that the responses from similar shapes were similar. So such response curves may be used to compute similarity measures, or distances, between shapes.

The time necessary to prune a skeleton depends on its number of pixels. So it may be worth converting a skeleton into a graph before pruning; pruning a graph then takes a time independent of the weights of its edges.

The response of a weighted multigraph to iterative pruning was computed with graph-tool (and additional Python libraries: mahotas, numpy, scipy, matplotlib). This response yields a curve: a "Response to Pruning Curve" or RPC. A distance was then defined between two RPCs to compute a similarity index between two graphs.

The graph tool version used is:

2.11 (commit b6b2152c, Mon Oct 26 10:31:30 2015 +0100)


This post summarises the ideas developed in a jupyter notebook.

Algorithm for pruning a graph from its leaves:

  1. Make a weighted undirected multigraph
  2. (Visualize it)
  3. Start pruning:
    1. Count the vertices of degree 1
    2. Read the weights of the edges bound to the vertices of degree 1 (V1)
    3. Get the edge(s) of minimal weight
    4. Remove this/these edge(s)
    5. Decrease the weights of the other edges bound to the V1 vertices by this minimal weight
  4. Repeat until no vertex of degree 1 remains
  5. Accumulate the weights of the removed edges
  6. The response to pruning of a graph associates the number of degree-1 vertices with the cumulative weight of the removed edges.

Make a toy-graph to play with:

By default in graph-tool, graphs are multigraphs, and that's what we need. The edges of the graph must be weighted, so this property has to be created. The weight of an edge corresponds to the size of a branch in a skeleton (its number of pixels). The following Python code shows the creation of a graph with an edge property called 'weight':
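The code itself appears as an image in the post; here is a minimal sketch consistent with the toy graph described below (four vertices, a self-loop on vertex 0, branch weights 8, 15 and 20). The weight assigned to the self-loop is an assumption, since the post never uses it:

from graph_tool.all import Graph

def make_toy_graph():
    g = Graph(directed=False)
    g.add_vertex(4)                         # vertices 0, 1, 2, 3
    weight = g.new_edge_property("double")  # edge property holding the branch sizes
    for u, v, w in [(0, 0, 10),   # self-loop on vertex 0 (weight assumed)
                    (0, 1, 8),
                    (1, 2, 15),
                    (1, 3, 20)]:
        e = g.add_edge(g.vertex(u), g.vertex(v))
        weight[e] = w
    g.edge_properties["weight"] = weight    # make the property internal to the graph
    return g

G = make_toy_graph()
print(G)   # <Graph object, undirected, with 4 vertices and 4 edges ...>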

 


The graph can be displayed such that the thickness of its edges maps to their weights:
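The drawing code is not reproduced in the post; one possible way to do it with graph-tool's drawing module (a sketch; the output file name is chosen here) is:

from graph_tool.draw import graph_draw, prop_to_size

w = G.edge_properties["weight"]
graph_draw(G,
           edge_pen_width=prop_to_size(w, mi=1, ma=8),  # map weights to line widths
           vertex_text=G.vertex_index,                  # label the vertices by index
           output="toy_graph.png")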

Start to prune the graph:

Two different versions of the pruning function were written. The second, called prune_once_faster(), relies on graph-tool's GraphView() to filter the vertices of degree 1, hoping to obtain a faster function:
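The functions themselves appear as images in the post; below is a minimal sketch of number_of_degree1_vertices() and prune_graph_once(), written to follow the algorithm listed above and to reproduce the outputs printed below. The degree() helper and the returned removed weight are additions of this sketch; a self-loop counts twice, as in the post's discussion of vertex 0.

def degree(g, v):
    # count edge end-points incident to v; a self-loop counts twice
    d = 0
    for e in g.edges():
        if int(e.source()) == int(v):
            d += 1
        if int(e.target()) == int(v):
            d += 1
    return d

def number_of_degree1_vertices(g):
    return sum(1 for v in g.vertices() if degree(g, v) == 1)

def prune_graph_once(g, weight_name='weight'):
    # Subtract the minimal leaf-edge weight from every edge bound to a
    # degree-1 vertex and remove the edge(s) whose weight reaches zero.
    # The graph is modified in place; the removed weight is returned.
    weight = g.edge_properties[weight_name]
    leaf_edges = [e for e in g.edges()
                  if degree(g, e.source()) == 1 or degree(g, e.target()) == 1]
    if not leaf_edges:
        return 0.0
    w_min = min(weight[e] for e in leaf_edges)
    for e in leaf_edges:
        weight[e] -= w_min
    # remove the pruned edge(s) one at a time, so that no stale edge
    # descriptor is kept after a removal
    removed = True
    while removed:
        removed = False
        for e in g.edges():
            if weight[e] == 0 and (degree(g, e.source()) == 1
                                   or degree(g, e.target()) == 1):
                g.remove_edge(e)
                removed = True
                break
    return w_min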


The pruning function is destructive: it modifies the graph in place. Applying it iteratively to the same graph, for example by writing in a jupyter cell:

G0 = make_toy_graph()
g0 = G0.copy()
prune_graph_once(g0)
g1 = g0.copy()
prune_graph_once(g0)
g2 = g0.copy()
prune_graph_once(g0)
g3 = g0.copy()
print G0, number_of_degree1_vertices(G0)
print g1, number_of_degree1_vertices(g1)
print g2, number_of_degree1_vertices(g2)
print g3, number_of_degree1_vertices(g3)
You get the following output:
<Graph object, undirected, with 4 vertices and 4 edges at 0x9c65c1ac> 2
<Graph object, undirected, with 4 vertices and 3 edges at 0x9c65cc2c> 1
<Graph object, undirected, with 4 vertices and 2 edges at 0x9c654dec> 1
<Graph object, undirected, with 4 vertices and 1 edge at 0x9c65424c> 0
This indicates that the number of edges decreases, as does the number of vertices of degree 1, each time the graph is pruned.

Pruning a graph iteratively:

Let's count the number of vertices of degree 1 as the graph is pruned. The toy graph used here to check its response has two vertices of degree 3 and two vertices of degree 1 (vertices 2 and 3). The initial weights of the edges bound to vertices 2 and 3 are:

15, 20

The smallest weight is 15, so after one pruning the weights become:

15-15 = 0, 20-15 = 5

The edge 1-2 of weight 0 is removed (vertex 2 is now of degree 0).
The edge 1-3 now has the smallest weight (5).
So after a second round of pruning, the edge 1-3 is removed.
Vertex 1 was of degree 2; now it is of degree 1. The weight of the edge 0-1 is 8.

A third pruning removes the remaining non-loop edge 0-1, of weight 8.

The degree of vertex 0 is 2 (there's a loop), so the edge 0-0 can't be pruned. The removed weights are accumulated as follows:

0, 15, 15+5 = 20, 20+8 = 28

Applied to a skeleton made of pixels, this would mean that:
  • 15 prunings of 1 pixel would remove one branch of size 15.
  • 20 prunings would remove the two branches of sizes 15 and 20.
  • 28 prunings would remove the three branches of sizes 15, 20 and 8 (the last one initially between two vertices of degree 3).

The cumulative weights are stored in a vector X.
The corresponding counts of degree-1 vertices are stored in a vector Y.
Both X and Y are returned by the function response_to_iterative_pruning(), sketched below.
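The function itself is shown as an image in the post; a minimal sketch consistent with the worked example above, relying on the prune_graph_once() sketch given earlier returning the removed weight:

def response_to_iterative_pruning(g):
    X = [0.0]                                # cumulative removed weight
    Y = [number_of_degree1_vertices(g)]      # number of leaves before each removal
    cumulated = 0.0
    while number_of_degree1_vertices(g) > 0:
        cumulated += prune_graph_once(g)
        X.append(cumulated)
        Y.append(number_of_degree1_vertices(g))
    return X, Y

On the toy graph this returns X = [0, 15, 20, 28] and Y = [2, 1, 1, 0], which are exactly the points used in the distance matrix below.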

Make a feature vector from the pruning curve?

A response curve was plotted as a step curve:
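The plot itself is an image in the post; the step curve can be reproduced with matplotlib, for example (a sketch):

import matplotlib.pyplot as plt

T = make_toy_graph()
X, Y = response_to_iterative_pruning(T)
plt.step(X, Y, where='post')
plt.xlabel('cumulative removed weight')
plt.ylabel('number of degree-1 vertices')
plt.show()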


The upper curve is the response when the raw edge weights are used. In the lower one, the weights were normalised by the total weight of the graph (% of total length or total weight), hoping to produce a scale-invariant response. In the following, only the raw response will be used.

Distance between two graphs:

First, we need new toy graphs to compare. The second new graph differs from the first only by its weights:

The third one has no loop:
A fourth one has a loop elsewhere:

The responses of the first three graphs are:

Overlaid responses of the first and third graphs (right)
We could try to compute the area between two step curves. For two identical curves the area would be zero; as the curves differ, the area increases. Fine!
However, the curves are defined only from discrete data, so this kind of integration may not be so easy to define.

Let's forget the curves and just look at the points of the scatter plot. Given two pruning curves obtained from toy graphs 2 and 3, we could simply compute the distance matrix between the points of the two curves. The more the curves differ, the more the distances should increase... why not, let's give it a try:


scipy provides just what we need!

The response curve is given as (X, Y) with
  • X: the pruning values
  • Y: the corresponding number of leaves in the graph (vertices of degree 1)
First, let's compute the distances between the points of the same curve:

from scipy.spatial.distance import cdist

T = make_toy_graph()
X, Y = response_to_iterative_pruning(T.copy())
pruning_curve = zip(X, Y)
D = cdist(pruning_curve, pruning_curve)
print D

Thanks to scipy, we get a 4x4 matrix of distances D (one row and one column per point of the response curve):

[[  0.          15.03329638  20.02498439  28.0713377 ]
 [ 15.03329638   0.           5.          13.03840481]
 [ 20.02498439   5.           0.           8.06225775]
 [ 28.0713377   13.03840481   8.06225775   0.        ]]
This matrix can be displayed as an image (the matrix is symmetric, so only the upper elements are displayed):

For the first three graphs T, T1, T2, one can define (3² − 3)/2 + 3 = 6 matrices of distances.
These matrices are the elements of another matrix (a matrix of matrices). The elements of the diagonal are square matrices corresponding to "intra-distances" (distances between the points of the same response curve).
The elements outside the diagonal correspond to "inter-distance" matrices (distances computed from points belonging to two different response curves); these inter-distance matrices can be rectangular if the two corresponding graphs do not have the same number of vertices (and square if they do):
For each matrix, the total distance can be computed as the sum of its elements. This yields:
[[ 178.  187.  256.]
 [   0.  105.  135.]
 [   0.    0.  140.]]

At first sight, this is not good: the elements outside the diagonal (the total distance between two response curves, that is, the distance between two graphs) should be greater than those on the diagonal.

Defining a normalized distance between two graphs

We try to heal this situation by normalising the inter-distances: for a pair of response curves, the sum of their two total intra-distances is divided by twice their total inter-distance, which yields a normalised distance nD.

When computed on the pair formed by a response curve and itself, the normalised distance equals 1, as wished:

nD(response curve i, response curve i) = 1

Finally, nD is computed as follows:

nD(ri, rj) = (ri + rj) / (2 * distance(ri, rj))

where ri and rj denote the total intra-distances of the two response curves and distance(ri, rj) their total inter-distance, such that:

nD(ri, rj) <= 1
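As a sketch (not the post's code), nD can be computed directly from the cdist matrices used above:

import numpy as np
from scipy.spatial.distance import cdist

def total_distance(curve_i, curve_j):
    # sum of the pairwise distances between the points of two response curves
    return cdist(curve_i, curve_j).sum()

def normalised_distance(curve_i, curve_j):
    r_i = total_distance(curve_i, curve_i)   # total intra-distance of curve i
    r_j = total_distance(curve_j, curve_j)   # total intra-distance of curve j
    return (r_i + r_j) / (2.0 * total_distance(curve_i, curve_j))

X, Y = response_to_iterative_pruning(make_toy_graph())
curve = np.column_stack([X, Y])
print(normalised_distance(curve, curve))   # 1.0 for a curve against itself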

Let's see the matrix of normalised distances for the first three graphs:

[[ 1.          0.75751403  0.62186779]
 [ 0.          1.          0.90636131]
 [ 0.          0.          1.        ]]
 
 
As an image:

This seems better. With that distance, the similarity of a graph with itself is 1, and two different graphs have a similarity below 1. It remains to be seen whether that distance can discriminate between graphs of interest, possibly after supervised classification.