Squidpy and spatial network graphs for high-resolution spatial transcriptomics

Author: Evil Genius

I wasn't planning to write a Children's Day post, but a follower asked a question, and I also came across a very interesting analysis, so I'm writing it up to share.

A follower asked: is it really worth paying several thousand to ten thousand yuan for a bioinformatics training course?

All I can say is: it depends on your situation. The people who ask this are usually students, mostly medical students.

But the quality of any given course is not something I can vouch for.

And everyone sees it differently. If you work at a listed company such as 兰卫医学, analysts are embedded in large project teams running single-cell + spatial + in situ analyses collaboratively, with a focus on analysis quality and high-impact publications; for them, training-course material has essentially no value. For a beginner, though, it can still be worth something.

Alright, on to today's network graphs. First, let's get reacquainted with squidpy. Because of Xenium projects this package has become very important; it has also been updated substantially and is especially useful for image-based spatial transcriptomics. The docs are at https://squidpy.readthedocs.io/. The paper is at

Take a look at the list of analyses:

It provides a full suite of analyses for each spatial platform, including:

Very comprehensive, and it also ships several cell-segmentation algorithms; a real treasure trove.

Let's focus on the Xenium analyses. The content for the other high-resolution platforms is much the same: mainly dimensionality reduction, clustering, and neighborhood enrichment analysis.
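Before diving in, here is a minimal, dependency-light sketch of what neighborhood enrichment (squidpy's `sq.gr.nhood_enrichment`) computes under the hood: count how often each pair of cluster labels sits adjacent in the spatial graph, then z-score the observed counts against label permutations. The function name `nhood_zscore` and the toy path graph are my own illustration, not squidpy's API.

```python
import numpy as np

def nhood_zscore(labels, neighbors, n_perms=200, seed=0):
    """Toy neighborhood enrichment: z-score observed label-pair adjacency
    counts against counts from randomly permuted labels."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    cats = np.unique(labels)
    idx = {c: i for i, c in enumerate(cats)}

    def pair_counts(lab):
        counts = np.zeros((len(cats), len(cats)))
        for a, nbrs in enumerate(neighbors):
            for b in nbrs:
                counts[idx[lab[a]], idx[lab[b]]] += 1
        return counts

    obs = pair_counts(labels)
    perms = np.stack([pair_counts(rng.permutation(labels)) for _ in range(n_perms)])
    return (obs - perms.mean(axis=0)) / (perms.std(axis=0) + 1e-9), cats

# Two label blocks along a path graph: same-label neighbors dominate, so
# the A-A entry should be enriched and the A-B entry depleted.
labels = ["A"] * 10 + ["B"] * 10
neighbors = [[j for j in (i - 1, i + 1) if 0 <= j < 20] for i in range(20)]
z, cats = nhood_zscore(labels, neighbors)
```

The real squidpy implementation works on the sparse graph in `adata.obsp` and is much faster, but the permutation z-score idea is the same.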


Now for the key part: squidpy has added a new analysis.

Look familiar? It's the cell degree analysis from lesson 16 of my spatial transcriptomics course, and squidpy now handles it with a single function. A quick walk-through of the code:

import anndata as ad
import scanpy as sc
import squidpy as sq

adata = sq.datasets.visium_hne_adata()

sq.pl.spatial_scatter(adata, color=["Sox8", "cluster"])
sq.pl.spatial_scatter(
    adata,
    color=["Sox8", "cluster"],
    crop_coord=[(1500, 1500, 3000, 3000)],
    scalebar_dx=3.0,
    scalebar_kwargs={"scale_loc": "bottom", "location": "lower right"},
)
# build the spatial neighbor graph (stored in adata.obsp["spatial_connectivities"])
sq.gr.spatial_neighbors(adata)
# simulate a second library by subsampling half of the spots
adata2 = sc.pp.subsample(adata, fraction=0.5, copy=True)
adata2.uns["spatial"] = {}
adata2.uns["spatial"]["V2_Adult_Mouse_Brain"] = adata.uns["spatial"][
    "V1_Adult_Mouse_Brain"
]
adata_concat = ad.concat(
    {"V1_Adult_Mouse_Brain": adata, "V2_Adult_Mouse_Brain": adata2},
    label="library_id",
    uns_merge="unique",
    pairwise=True,
)
sq.pl.spatial_scatter(
    adata_concat,
    color=["Sox8", "cluster"],
    library_key="library_id",
    connectivity_key="spatial_connectivities",
    edges_width=2,
    crop_coord=[(1500, 1500, 3000, 3000), (1500, 1500, 3000, 3000)],
)
sq.pl.spatial_scatter(
    adata_concat,
    color=["Sox8", "cluster"],
    library_key="library_id",
    library_first=False,
    connectivity_key="spatial_connectivities",
    edges_width=2,
    crop_coord=[(1500, 1500, 3000, 3000), (1500, 1500, 3000, 3000)],
    outline=True,
    outline_width=[0.05, 0.05],
    size=[1, 0.5],
    title=[
        "sox8_first_library",
        "sox8_second_library",
        "cluster_first_library",
        "cluster_second_library",
    ],
)
sq.pl.spatial_scatter(
    adata_concat,
    shape=None,
    color=["Sox8", "cluster"],
    library_key="library_id",
    library_first=False,
    connectivity_key="spatial_connectivities",
    edges_width=2,
    crop_coord=[(1500, 1500, 3000, 3000), (1500, 1500, 3000, 3000)],
    outline=True,
    outline_width=[0.05, 0.05],
    size=[1, 0.5],
    title=[
        "sox8_first_library",
        "sox8_second_library",
        "cluster_first_library",
        "cluster_second_library",
    ],
)

A really nice approach.
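For reference, the "cell degree" itself is just the number of spatial neighbors per cell. Once `sq.gr.spatial_neighbors` has run, the adjacency lives in `adata.obsp["spatial_connectivities"]` and the degree is a row sum. A minimal sketch, with a hand-made sparse matrix standing in for the real one:

```python
import numpy as np
from scipy.sparse import csr_matrix

# A tiny 4-cell adjacency standing in for adata.obsp["spatial_connectivities"]
# (where squidpy stores the graph after sq.gr.spatial_neighbors):
# cell 0 touches cells 1 and 2; cell 3 touches only cell 2.
adj = csr_matrix(np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
]))

# per-cell degree = row sum of the binarized connectivity matrix
degree = np.asarray((adj > 0).sum(axis=1)).ravel()
print(degree)  # → [2 1 2 1]
```

On a real object the same row sum can be written back as `adata.obs["degree"]` and plotted with `sq.pl.spatial_scatter`.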

Next, the main event, shown in the figure below:

It takes squidpy's machinery, but the plot squidpy draws on its own is quite ugly, as you can see:

That falls far short of what we need, so let's implement it ourselves. Here is a single start-to-finish script for you to study:
import numpy as np
import pandas as pd
import scanpy as sc
import squidpy as sq
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.font_manager as fm
from matplotlib import patches as mpatches
from mpl_chord_diagram import chord_diagram
from mpl_toolkits.axes_grid1.anchored_artists import AnchoredSizeBar
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def plot_segmentation_polygon(adata,
                              polygon_file,
                              region_subset,
                              cell_subset,
                              color,
                              annotation,
                              xlim_max,
                              ylim_max,
                              save,
                              save_path,
                              linewidth = 0.1,
                              color_is_gene_exp =False,
                              cell_id_col = 'CellID',
                             ):
    
    import ast
    cmap = mpl.cm.RdBu_r
    normalize = mpl.colors.Normalize(vmin=-2, vmax=2)
    polygons = pd.read_csv(polygon_file)
    polygons = polygons.set_index('0')
    
    adata.obs['CellID'] = adata.obs['CellID'].astype(int)
    if len(region_subset) == 4: 
        adata = adata[(adata.obs.x>region_subset[0]) & (adata.obs.x<region_subset[1]) & (adata.obs.y>region_subset[2]) & (adata.obs.y<region_subset[3])] 
        
    subset = adata[adata.obs.CellID.isin(pd.Series(polygons.index).astype(int).to_list())]
    
    if len(cell_subset) != 0:
        subset_filt = subset[subset.obs[annotation].isin(cell_subset) ] 
        subset_removed = subset[~subset.obs[annotation].isin(cell_subset) ] 
        subset_removed.obs[color] = '#ffffff'
        subset = sc.concat([subset_filt, subset_removed])

    color_dict =  dict(zip(subset.obs['CellID'],subset.obs[color]))
    fig=plt.figure(figsize=(4,4),dpi=500)
    
    ax = fig.add_subplot(1, 1, 1)
    with plt.rc_context({'figure.figsize': (10, 10), 'figure.dpi':500}):
        new_list = []
        for i in polygons.index:
            try: 
                try:
                    if  color_is_gene_exp:
                        if color_dict[i] == '#ffffff':
                            color_ = '#ffffff'
                        else: 
                            color_ =  cmap(normalize(color_dict[i]))
                        plt.fill(ast.literal_eval(polygons[polygons.index == (i)]['x_exterior'][i]),
                             ast.literal_eval(polygons[polygons.index == (i)]['y_exterior'][i]), 
                             facecolor = color_, edgecolor='black',linewidth=linewidth) #,facecolor=array_sub.iloc[0,3] ,
                    else:
                        plt.fill(ast.literal_eval(polygons[polygons.index == (i)]['x_exterior'][i]),
                                 ast.literal_eval(polygons[polygons.index == (i)]['y_exterior'][i]), 
                                 facecolor = color_dict[i], edgecolor='black',linewidth=linewidth) #,facecolor=array_sub.iloc[0,3] ,
                    new_list.append(i)
                except KeyError:
                    continue
            except AttributeError:
                continue
       # if len(region_subset) == 4: 
       #     roi = Rectangle((region_subset[2],region_subset[0]),region_subset[3]-region_subset[2],region_subset[1]-region_subset[0],linewidth=1,edgecolor='black',facecolor='none')
       #     ax.add_patch(roi);
        plt.axis('scaled')
        plt.xticks(np.arange(0, xlim_max+1, 2000))
        plt.yticks(np.arange(0, ylim_max+1, 2000))


        
        if save: 
            plt.rcParams['pdf.fonttype'] = 42
            plt.rcParams['ps.fonttype'] = 42
            plt.rcParams['svg.fonttype'] = 'none'
            plt.savefig(save_path, bbox_inches="tight")
        
        plt.show()

def hex_to_rgb(hex_color):
    """Convert '#rrggbb' (or '#rrggbbaa') to an (r, g, b) tuple."""
    hex_color = hex_color.lstrip('#')
    return tuple(int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
def plot_segmentation_mask_colored(ad_sp,
                                    coo_file,
                                    color_column,
                                   positions,
                                   output_file,
                                  ):

    # import packages

    import scanpy as sc
    import pandas as pd
    from scipy.sparse import load_npz
    from scipy.sparse import coo_matrix
    import skimage
    from skimage.color import label2rgb
    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib as mpl
    plt.rcdefaults()
    # load the sparse segmentation label matrix
    coo = load_npz(coo_file)
    array = coo.toarray()

    # subset image
    image_subset=array[positions[0]:positions[1],positions[2]:positions[3]]
    rgb_label_image = label2rgb(image_subset, bg_label=0, bg_color = None, kind = 'overlay')

    #Cell_Num = ad_sp.obs.index.astype(int)+1 #pd.DataFrame(ad_sp.obs.index)['CellID'].str.split('_',expand = True)[3].astype(int)
    #ad_sp.obs['CellID'] = list(Cell_Num)
    ad_sp.obs['col_rgb'] = [hex_to_rgb(item) for item in ad_sp.obs[color_column]]
    # subset anndata object
    ad_sp_int = ad_sp[ad_sp.obs['CellID'].astype(int).isin(image_subset.flatten())]

    # color image
    
    filtered_cells = dict(zip(ad_sp_int.obs['CellID'].astype(int), ad_sp_int.obs['col_rgb']))
    values = (np.unique(image_subset.flatten()))

    colors_empty = np.zeros((values.max()+1, 3)).astype(int)
    for i in filtered_cells:
        colors_empty[i] = np.array(filtered_cells[i])

    colored_image = colors_empty[image_subset]
    with plt.rc_context({'figure.figsize': (20, 20)}):
        plt.imshow(colored_image)
        # plt.gca().invert_yaxis()
        plt.axis('off')
        mpl.rcParams['pdf.fonttype'] = 42
        mpl.rcParams['ps.fonttype'] = 42
        plt.rcParams['svg.fonttype'] = 'none'
        plt.savefig(output_file, dpi = 600)
        plt.show()
        


def get_interaction_summary(df):
    import numpy as np

    # Symmetrize by mirroring the upper triangle (incl. diagonal) onto the
    # lower triangle; values already in the lower triangle are discarded
    M = np.triu(df) + np.triu(df, 1).T

    # Create a dictionary to store the summaries
    summary_dict = {}

    # Iterate over the upper triangle of the matrix
    for i in range(len(M)):
        for j in range(i+1, len(M)):
            # Check if the same interacting entities have been seen before
            if (i,j) in summary_dict:
                continue
            elif (j,i) in summary_dict:
                continue
            else:
                # If not, find all other interactions with the same entities and add up their values
                interacting_entities = [(i,j)]
                for k in range(j+1, len(M)):
                    if M[i][k] == M[j][k]:
                        interacting_entities.append((i,k))
                        interacting_entities.append((j,k))
                # Store the summary in the dictionary
                summary_dict[(i,j)] = sum(M[x][y] for x,y in interacting_entities)

    inter1 = []
    inter2 = []
    value = []
    # Print the summaries
    for k, v in summary_dict.items():
        inter1.append(k[0])
        inter2.append(k[1])
        value.append(v)
    df_interaction = pd.DataFrame(inter1)
    df_interaction['int2'] = pd.DataFrame(inter2)
    df_interaction['value'] = pd.DataFrame(value)
    mapp = dict(zip(range(len(df)),df.index))
    df_interaction[0] = df_interaction[0].map(mapp)
    df_interaction['int2'] = df_interaction['int2'].map(mapp)
    return(df_interaction)
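A quick sanity check of the symmetrization step used in `get_interaction_summary` (`np.triu(df) + np.triu(df, 1).T`), on a made-up 3×3 count matrix: the upper triangle is kept and mirrored, so the result is symmetric and any counts that were only in the lower triangle are dropped.

```python
import numpy as np

# Hypothetical directed interaction counts: df[i, j] = contacts from
# cluster i to cluster j (numbers invented for illustration).
df = np.array([
    [5, 2, 0],
    [3, 4, 1],
    [7, 0, 6],
])

# The same symmetrization as in get_interaction_summary: keep the upper
# triangle (with the diagonal) and mirror it onto the lower triangle.
M = np.triu(df) + np.triu(df, 1).T
```

Note that this assumes the direction of a contact doesn't matter; if both `df[i, j]` and `df[j, i]` carry information, you would want `df + df.T` instead.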

adata = sc.read('test.h5ad')

adata.obsm["spatial"] = adata.obs[["x", "y"]].copy().to_numpy()
adata = adata[adata.obs['Annotation 3.5'] != 'Unknown']
mapping = {'Astro': 'Astro',
 'CAM': 'CAM',
 'DA-Astro': 'DA-Astr',
 'DA-MOL2': 'DA-MOL2',
 'DA-MOL5/6': 'DA-MOL5/6',
 'DA-MiGL': 'DA-MiGL',
 'DA-OPC/COP': 'DA-OPC/COP',
 'DC': 'DC',
 'Ep': 'Ep',
 'Ep-Neu': 'Ep',
 'MC/d': 'MC/d',
 'MOL2': 'MOL2',
 'MOL5/6': 'MOL5/6',
 'MiGL': 'MiGL',
 'NFOL/MFOL': 'NFOL/MFOL',
 'Neu-Chol': 'Neuron',
 'Neu-Ex-Glu': 'Neuron',
 'Neu-In-GABA': 'Neuron',
 'Neu-In-Gly': 'Neuron',
 'OPC/COP': 'OPC/COP',
 'Per': 'Vascular',
 'Schw': 'Schw',
 'T-cell': 'T-cell',
 'VEC': 'Vascular',
 'VEC-Per': 'Vascular',
 'VLMC': 'Vascular'}
adata.obs['interaction_annotations'] = adata.obs['Annotation 3.5'].map(mapping)

color_interaction_anno = {
    'MOL2': '#7c4dffff',
    'MOL5/6': '#311b92ff',
    'DA-MOL5/6': '#58104dff',
    'DA-MOL2': '#941b81ff',
    'DA-Astr': '#25cc44ff',
    'DA-MiGL': '#ff9300ff',
    'MiGL': '#ffdb00ff',
    'Astro': '#009051ff',
    'MC/d': '#ff009dff',
    'DC': '#b8860bff',
    'T-cell': '#f8ff00ff',
    'Neuron': '#133c55ff',
    'NFOL/MFOL': '#7d1b94ff',
    'Vascular': '#ff0000ff',
    'Schw': '#929000ff',
    'CAM': '#ff00f5ff',
    'OPC/COP': '#9e8ac0',
    'DA-OPC/COP': '#ff87bbff',
    'Ep': '#ffc0cbff',
}
adata.obs['interaction_annotations_colors'] = adata.obs['interaction_annotations'].map(color_interaction_anno)
sc.pl.umap(adata, color = 'interaction_annotations', palette = color_interaction_anno)
sq.gr.spatial_neighbors(adata,
                        library_key='sample_id',
                        coord_type="generic",
                        delaunay=False,
                        n_neighs=5)
ad_list = []
for sample in adata.obs.sample_id.unique(): 
    ad_sp = adata[adata.obs.sample_id == sample]
    if '_B_' in sample:
        radius = 300
        size = 20 
    else:
        radius = 180
        size = 60 


    sq.pl.spatial_scatter(ad_sp, 
                          color = 'interaction_annotations',
                          #coords=adata.obsm['spatial'],
                         # crop_coord=region_subset_dict[sample],
                          size= size,shape=None,
                          figsize=(20, 20), 
                          connectivity_key = 'spatial_connectivities', 
                         )#save = sample+'.svg')
    plt.show()
    ad_list.append(ad_sp)
region_subset_dict  = {
    #'R03_L_EAE_EARLY': [ 7750, 7750+2000,1750, 1750+2000],
    'R03_L_EAE_EARLY': [ 2500, 4500,10000, 12000],
    #'R01_B_EAE_PEAK': [23500,25500,24000,26000],   
    #'R02_T_EAE_PEAK':[4000,6000,2500,4500],
    #'R10_C_EAE_LATE': [13900,15900,12000,14000],
     'R02_L_EAE_PEAK': [7500, 9500, 7750, 9750],
    'R05_L_EAE_LATE': [11000,13000,9500,11500]
                }
for sample in region_subset_dict.keys():
    ad_sp = adata[adata.obs.sample_id == sample]
    
    region_subset = region_subset_dict[sample]


    if '_B_' in sample:
        xlim_max = adata[adata.obs.batch == 'brain'].obs.x.max()
        ylim_max = adata[adata.obs.batch == 'brain'].obs.y.max()
        polygon_file = '/home/christoffer/eae/eae_care/'+'R1_B_BOTH_PEAK'+'_expanded_polygons.csv'
    else:
        xlim_max = adata[adata.obs.batch == 'spinal cord'].obs.x.max()
        ylim_max = adata[adata.obs.batch == 'spinal cord'].obs.y.max()
        polygon_file = '/home/christoffer/eae/eae_care/'+sample+'_expanded_polygons.csv'


    
    
    ad_sp = ad_sp[(ad_sp.obs.x>region_subset[0]) & (ad_sp.obs.x<region_subset[1]) & (ad_sp.obs.y>region_subset[2]) & (ad_sp.obs.y<region_subset[3])] 


    if '_B_' in sample:
        radius = 300
        size = 300
    else:
        radius = 180
        size = 500 

    fig = sq.pl.spatial_scatter(ad_sp, 
                          color = 'interaction_annotations',
                         #crop_coord=[(2550,2550,11000,11500)], # (left, right, top, bottom)
                          size= size,
                          shape=None,
                          figsize=(10, 10), 
                          connectivity_key = 'spatial_connectivities',
                         #save = sample+'.svg',
                     scalebar_dx = 0.1625,
                          scalebar_units = 'um',
                          return_ax = True
                         )
    plt.rcParams['pdf.fonttype'] = 42
    plt.rcParams['ps.fonttype'] = 42
    plt.rcParams['svg.fonttype'] = 'none'
    plt.savefig(sample+'.svg', bbox_inches="tight")
    plt.show()
    fig = sq.pl.spatial_scatter(ad_sp, 
                          color = 'compartment',
                         #crop_coord=[(2550,2550,11000,11500)], # (left, right, top, bottom)
                          size= size,
                          shape=None,
                          figsize=(10, 10), 
                          connectivity_key = 'spatial_connectivities',
                         #save = sample+'.svg',
                     scalebar_dx = 0.1625,
                          scalebar_units = 'um',
                          return_ax = True
                         )
    plt.show()
for sample in region_subset_dict.keys():  # zoom-in regions defined above
    ad_sp = adata[adata.obs.sample_id == sample]
    region_subset = region_subset_dict[sample]
    ad_sp1 = ad_sp[(ad_sp.obs.x>region_subset[0]) & (ad_sp.obs.x<region_subset[1]) & (ad_sp.obs.y>region_subset[2]) & (ad_sp.obs.y<region_subset[3])] 


    fig, ax = plt.subplots()
    ax.scatter(ad_sp.obs[['x','y']].x,ad_sp.obs[['x','y']].y,s= 1)
    ax.scatter(ad_sp1.obs[['x','y']].x,ad_sp1.obs[['x','y']].y,s= 1)
    ax.invert_yaxis()
    plt.show()

sc.pl.umap(adata, color = 'interaction_annotations', palette = color_interaction_anno)
compartment_dictionary = {
    'CNTRL': ['WM', 'GM', 'Corpus callosum'],
    'EAE': ['WM', 'GM', 'PL', 'LR', 'LC', 'LL', 'Corpus callosum', 'DA-region'],
}
adata.obs['interaction_grouping'] = adata.obs.type.astype(str) + '_' + adata.obs.batch.astype(str) + '_' +adata.obs.compartment.astype(str) 
interaction_grouping = ['CNTRL_spinal cord_WM', 'CNTRL_spinal cord_GM',
                        'EAE_spinal cord_WM', 'EAE_spinal cord_GM',
                        'EAE_spinal cord_PL', 'EAE_spinal cord_LR',
                        'EAE_spinal cord_LC', 'EAE_spinal cord_LL',
                        'EAE_brain_DA-region', 'EAE_brain_Corpus callosum']
level_ = 'interaction_annotations'
region_subset_dict = {'R1_L_EAE_PEAK': [(9500,11000,16000,18000)],
                        #'R1_T_EAE_PEAK': [(7500,9000,3000,5000)],
                       # 'R3_L_EAE_PEAK': [(8000,9000,15000,18000)],
                       # 'R3_T_EAE_PEAK':[(10000,12000,12000,13000)],      
                       # 'R4_C_EAE_PEAK':[(14700,16000,17000,18000)],
                       # 'R4_L_EAE_PEAK':[14000,16000,8000,10000],
                      'R6_L_EAE_LATE':[(3000,5000,16000,18000)],
                        # y_start, y_end, x_start, x_end

            
}
region_subset_dict = {'R1_B_EAE_PEAK':[23000, 25000, 25000, 27000],
    'R1_C_EAE_PEAK':[16000, 18000, 11000, 13000],
                      'R2_T_EAE_PEAK':[7500, 9500, 800, 2800],
                      'R2_L_EAE_PEAK':[12000, 14000, 3500, 5500],
                      'R5_L_EAE_LATE':[12000, 14000, 10000, 12000],
                     }
adata_SC = adata[adata.obs.region.isin(['L'])]
adata_B = adata[adata.obs.batch == 'brain']
adata_SC_peak = adata_SC[adata_SC.obs.time == 'PEAK']
import os
path = '/media/christoffer/DATA 2/CML/eae/FIGURES/fig5_updated/'
os.makedirs(path, exist_ok=True)
    
def get_range(list_int):
    import numpy as np

    # Define your list of floats
    data = list_int

    # Determine the range of values in the data set
    data_range = max(data) - min(data)

    # Calculate the bin size based on the desired number of bins
    num_bins = 5
    bin_size = data_range / num_bins

    # Use the histogram function to get the frequency counts and bin edges
    counts, edges = np.histogram(data, bins=num_bins)

    # Create a list of representative integers based on the bin edges
    integers = [int(round(edge)) for edge in edges]

    # Print the results
    print(f"Counts: {counts}")
    print(f"Bin Edges: {integers}") 
    return integers
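The `get_range` helper above is essentially one `np.histogram` call: five equal-width bins over the weight range, with the rounded bin edges reused as legend labels. A quick check of that behavior on toy data:

```python
import numpy as np

# Same binning as get_range: 5 equal-width bins over the data range,
# rounded bin edges used as legend tick values.
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
counts, edges = np.histogram(data, bins=5)
integers = [int(round(e)) for e in edges]
print(integers)  # → [1, 2, 3, 4, 5, 6]
```

Note there is always one more edge than bin, so the legend built from `integers` has six line widths for five bins.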
from scipy import stats
from matplotlib.lines import Line2D
adata.obs['interaction_annotations'] = adata.obs['interaction_annotations'].astype('category')
level_ = 'interaction_annotations'

ad_list_2 = {}
for int_group in ['EAE']:
    print(int_group)
    adata_int2 = adata_SC[adata_SC.obs['type'] == int_group]
    for time in adata_int2.obs.time.unique():
        print(time)
        adata_int = adata_int2[adata_int2.obs['time'] == time]

        sq.gr.nhood_enrichment(adata_int, cluster_key=level_)
        sq.pl.nhood_enrichment(adata_int, cluster_key=level_)
        adata_int.uns[level_+'_nhood_enrichment']['zscore'] = np.nan_to_num(adata_int.uns[level_+'_nhood_enrichment']['zscore'])
        colors =pd.DataFrame(dict(zip(adata.obs['interaction_annotations'].cat.categories,adata.uns['interaction_annotations_colors'])).values())#pd.DataFrame(adata_int.uns[level_+'_colors'])
        for_eneritz = pd.DataFrame(adata_int.uns[level_+"_nhood_enrichment"]["zscore"])
        for_eneritz.index = adata_int.obs['interaction_annotations'].cat.categories
        for_eneritz.columns = adata_int.obs['interaction_annotations'].cat.categories
        #for_eneritz.to_csv(path+'SC_'+int_group+'_'+time+'_interaction_matrix.csv')
        size = pd.DataFrame(adata_int.obs[level_].value_counts())
        print(size)
        # create network plot
        import networkx as nx
        import matplotlib.pyplot as plt

        G = nx.Graph()
        nodes = adata_int.obs[level_].cat.categories
        categories = pd.DataFrame(adata_int.obs[level_].cat.categories)
        colors['cells'] = categories

        for i, node in enumerate(nodes):
            for j in range(i + 1, len(nodes)):
                zscore = adata_int.uns[level_ + "_nhood_enrichment"]["zscore"][i, j]
                if zscore > 1:
                    G.add_edge(nodes[i], nodes[j], weight=zscore)

        pos = nx.spring_layout(G, k=0.5, seed=42)
        size = size[size.index.isin(pos.keys())]
        size = size.sort_index()
        colors = colors[colors.cells.isin(pos.keys())]
        colors = dict(zip(colors['cells'], colors[0]))

        edge_widths = [d['weight'] for u, v, d in G.edges(data=True)]

        size = dict(zip(size.index, size['interaction_annotations']))
        node_size = [size[node] for node in G.nodes()]
        node_colors = [colors[node] for node in G.nodes()]

        plt.figure(figsize=(10, 10))
        # use a dedicated name so we don't shadow the scanpy alias `sc`
        node_collection = nx.draw_networkx_nodes(G, pos, node_color=node_colors, alpha=0.5, node_size=np.array(node_size)/2)
        nx.draw_networkx_edges(G, pos, edge_color="black", alpha=0.5, width=0.25*(np.array(edge_widths)/5))
        nx.draw_networkx_labels(G, pos, font_size=15, font_color="black")
        plt.axis("off")
        legend1 = plt.legend(*node_collection.legend_elements("sizes", num=6),
                   bbox_to_anchor=(1, 1),
                   prop={'size': 70},
                   title='# cells in cluster',
                   frameon=False)


        lines = []
        edges_weight_list = sorted(np.array(edge_widths))
        integers = get_range(edges_weight_list)
        for i, width in enumerate(integers):
            lines.append(Line2D([],[], linewidth=0.25*(width/5), color='black'))

        legend2 = plt.legend(lines,integers,prop={'size': 20}, bbox_to_anchor=(0, 0.5), frameon=False, ) 

        plt.gca().add_artist(legend1)
        plt.gca().add_artist(legend2)


        plt.rcParams['pdf.fonttype'] = 42
        plt.rcParams['ps.fonttype'] = 42
        plt.rcParams['svg.fonttype'] = 'none'
        plt.savefig(path+'SC_'+int_group+'_'+time+'_nhood_enrichment.svg', bbox_inches="tight", dpi = 500)
        plt.show()

        sq.gr.interaction_matrix(adata_int, cluster_key=level_, normalized = False)
        #sq.pl.interaction_matrix(adata_int, cluster_key=level_,)# vmax = 5000, method="ward",)

        df = pd.DataFrame(adata_int.uns[level_+'_interactions'])
        df_filt = df#[df.sum() > df.sum().quantile(0.6)]
        df_filt = df_filt.T
        df_filt = df_filt[df_filt.index.isin(df_filt.columns)]
        colors =pd.DataFrame(adata_int.uns[level_+'_colors'])
        colors = colors[colors.index.isin(df_filt.columns)][0]
        categories = pd.DataFrame(adata_int.obs[level_].cat.categories)
        categories = categories[categories.index.isin(df_filt.columns)][0]
        df_filt.index = categories
        df_filt.columns = categories
        with plt.rc_context({'figure.figsize': (10, 10), 'figure.dpi':100}):
            chord_diagram(df_filt, names = list(categories), 
                          rotate_names = True, fontcolor = 'black',
                          fontsize=10,colors = list(colors), alpha = 0.90,
                         sort = 'distance', use_gradient= True, show= False)
            plt.rcParams['pdf.fonttype'] = 42
            plt.rcParams['ps.fonttype'] = 42
            plt.rcParams['svg.fonttype'] = 'none'
            plt.savefig(path+'SC_'+int_group+'_'+time+'_interaction_matrix.svg', bbox_inches="tight")
        plt.show()
ad_list_2 = {}
for int_group in ['EAE']:  # optionally also 'CNTRL'
    print(int_group)
    adata_int2 = adata_B[adata_B.obs['type'] == int_group]
    for region in adata_int2.obs.region.unique():
        print(region)
        adata_int = adata_int2[adata_int2.obs['region'] == region]
        sq.gr.nhood_enrichment(adata_int, cluster_key=level_)
        sq.pl.nhood_enrichment(adata_int, cluster_key=level_)
        adata_int.uns[level_+'_nhood_enrichment']['zscore'] = np.nan_to_num(adata_int.uns[level_+'_nhood_enrichment']['zscore'])
        colors =pd.DataFrame(dict(zip(adata.obs['interaction_annotations'].cat.categories,adata.uns[level_+'_colors'])).values())#pd.DataFrame(adata_int.uns[level_+'_colors'])

        for_eneritz = pd.DataFrame(adata_int.uns[level_+"_nhood_enrichment"]["zscore"])
        for_eneritz.index = adata_int.obs['interaction_annotations'].cat.categories
        for_eneritz.columns = adata_int.obs['interaction_annotations'].cat.categories
        for_eneritz.to_csv(path+'B_'+int_group+'_'+region+'_interaction_matrix.csv')
        size = pd.DataFrame(adata_int.obs[level_].value_counts())
        print(size)
        # create network plot
        import networkx as nx
        import matplotlib.pyplot as plt

        G = nx.Graph()
        nodes = adata_int.obs[level_].cat.categories
        categories = pd.DataFrame(adata_int.obs[level_].cat.categories)
        colors['cells'] = categories

        for i, node in enumerate(nodes):
            for j in range(i + 1, len(nodes)):
                zscore = adata_int.uns[level_ + "_nhood_enrichment"]["zscore"][i, j]
                if zscore > -1:
                    G.add_edge(nodes[i], nodes[j], weight=zscore)

        pos = nx.spring_layout(G,  k=0.15,seed=42)
        size = size[size.index.isin(pos.keys())]
        size = size.sort_index()
        colors = colors[colors.cells.isin(pos.keys())]
        colors = dict(zip(colors['cells'], colors[0]))

        edge_widths = [d['weight'] for u, v, d in G.edges(data=True)]

        size = dict(zip(size.index, size[level_]))
        node_size = [size[node] for node in G.nodes()]
        node_colors = [colors[node] for node in G.nodes()]
        
        plt.figure(figsize=(20, 20))
        # use a dedicated name so we don't shadow the scanpy alias `sc`
        node_collection = nx.draw_networkx_nodes(G, pos, node_color=node_colors, alpha=0.5, node_size=np.array(node_size)/2)
        nx.draw_networkx_edges(G, pos, edge_color="black", alpha=0.5, width=0.25*(np.array(edge_widths)/5))
        nx.draw_networkx_labels(G, pos, font_size=20, font_color="black")
        plt.axis("off")

        legend1 = plt.legend(*node_collection.legend_elements("sizes", num=6),
                   bbox_to_anchor=(1, 1),
                   prop={'size': 80},
                   title='# cells in cluster',
                   frameon=False)
        
        
        lines = []
        edges_weight_list = sorted(np.array(edge_widths))
        integers = get_range(edges_weight_list)
        for i, width in enumerate(integers):
            lines.append(Line2D([],[], linewidth=0.25*(width/5), color='black'))

        legend2 = plt.legend(lines,integers,prop={'size': 20}, bbox_to_anchor=(0, 0.5), frameon=False, ) 
        
        plt.gca().add_artist(legend1)
        plt.gca().add_artist(legend2)


        plt.rcParams['pdf.fonttype'] = 42
        plt.rcParams['ps.fonttype'] = 42
        plt.rcParams['svg.fonttype'] = 'none'
        plt.savefig(path+'B_'+int_group+'_'+region+'_nhood_enrichment.svg', bbox_inches="tight", dpi=500)
        
        sq.gr.interaction_matrix(adata_int, cluster_key=level_, normalized = False)
        #sq.pl.interaction_matrix(adata_int, cluster_key=level_,)# vmax = 5000, method="ward",)

        df = pd.DataFrame(adata_int.uns[level_+'_interactions'])
        df_filt = df#[df.sum() > df.sum().quantile(0.6)]
        df_filt = df_filt.T
        df_filt = df_filt[df_filt.index.isin(df_filt.columns)]
        colors =pd.DataFrame(adata_int.uns[level_+'_colors'])
        colors = colors[colors.index.isin(df_filt.columns)][0]
        categories = pd.DataFrame(adata_int.obs[level_].cat.categories)
        categories = categories[categories.index.isin(df_filt.columns)][0]
        df_filt.index = categories
        df_filt.columns = categories
        with plt.rc_context({'figure.figsize': (10, 10), 'figure.dpi':100}):
            chord_diagram(df_filt, names = list(categories), 
                          rotate_names = True, fontcolor = 'black',
                          fontsize=10,colors = list(colors), alpha = 0.90,
                         sort = 'distance', use_gradient= True, show= False)
            plt.rcParams['pdf.fonttype'] = 42
            plt.rcParams['ps.fonttype'] = 42
            plt.rcParams['svg.fonttype'] = 'none'
            plt.savefig(path+'B_'+int_group+'_'+region+'_interaction_matrix.svg', bbox_inches="tight")
        plt.show()

Life is good, and better with you.

Copyright belongs to the author. For reprints or commercial use, please contact the author.