Introduction

The NuttX memory management code lives under nuttx/mm, in the following subdirectories:
- mm_heap: the common heap allocator
- umm_heap: the user-mode heap allocator
- kmm_heap: the kernel-mode heap allocator
- mm_gran: the granule allocator
- shm: shared memory support

In more detail:
The nuttx/mm directory contains the logic of the NuttX memory management unit, including:

- Standard memory management functions

Standard functions

The standard interfaces are those declared in stdlib.h, as specified by IEEE Std 1003.1-2003. The files involved are:

Standard interfaces: mm_malloc.c, mm_calloc.c, mm_realloc.c, mm_memalign.c, mm_free.c
Less standard interfaces: mm_zalloc.c, mm_mallinfo.c
Internal implementation: mm_initialize.c, mm_sem.c, mm_addfreechunk.c, mm_size2ndx.c, mm_shrinkchunk.c
Build and configuration files: Kconfig, Makefile
Memory models

Small memory model: if the MCU supports only 16-bit data addressing, the small memory model is selected automatically and the maximum heap size is 64 KiB. Setting CONFIG_SMALL_MEMORY forces the small memory model on MCUs with wider addressing.

Large memory model: the allocator supports heap sizes up to 4 GiB.

Both are handled by a single variable-length allocator with the following properties:

Overhead: 8 bytes per allocation in the large memory model, 4 bytes in the small memory model;
Alignment: 8-byte alignment in the large model, 4-byte alignment in the small model.
Multiple heap instances

The allocator can be used to manage multiple heaps. A heap is described by struct mm_heap_s, defined in include/nuttx/mm/mm.h. To create a heap instance, allocate a heap structure, usually statically:

#include <nuttx/mm/mm.h>
static struct mm_heap_s g_myheap;

and initialize it with:

mm_initialize(&g_myheap, myheap_start, myheap_size);

Once initialized, the heap instance can be used with most of the interfaces, e.g. mm_malloc(), mm_realloc(), mm_free(). These look familiar because they behave like malloc(), realloc() and free(); the difference is that they take the heap instance as an extra parameter. In fact, malloc(), realloc() and free() are implemented on top of mm_malloc(), mm_realloc() and mm_free().
User/kernel heaps

Kernel configuration options select support for a user-mode heap and a kernel-mode heap. The relevant subdirectories are:
- mm/mm_heap: the base logic shared by all heap allocators;
- mm/umm_heap: the user-mode heap allocation interfaces;
- mm/kmm_heap: the kernel-mode heap allocation interfaces;
- Granule allocator

The mm_gran directory provides a granule allocator. The granule allocator hands out memory in fixed-size blocks (granules), and allocations can be aligned to a caller-specified address boundary. The granule allocator interface is defined in the header nuttx/include/nuttx/mm/gran.h; the implementation lives in this directory in mm_gran.h, mm_granalloc.c, mm_grancritical.c, mm_granfree.c and mm_graninit.c.

The granule allocator is not widely used inside NuttX; its purpose is to provide a tool to support platform-specific needs such as aligned DMA memory.

Note: because each granule may be aligned and each allocation is made in units of the granule size, the choice of granule size matters: larger granules give better performance and lower overhead, but lose more memory to quantization waste. Alignment may cost additional memory, so heap alignment should not be used unless (1) the granule allocator is being used to manage DMA memory, or (2) the hardware has specific memory alignment requirements.

The current implementation limits a single allocation to 32 granules. This limit could be removed with additional coding effort, but for now larger granules are required for larger allocations.
Usage example:

Locate a DMA heap in memory using the GCC section attribute (the linker script places .dmaheap in DMA-capable memory):

FAR uint32_t g_dmaheap[DMAHEAP_SIZE] __attribute__((section(".dmaheap")));

Create the heap by calling gran_initialize(). Here the granule size is 64 bytes (log2gran = 6) and the alignment is 16 bytes (log2align = 4):

GRAN_HANDLE handle = gran_initialize(g_dmaheap, DMAHEAP_SIZE, 6, 4);

The GRAN_HANDLE can then be used to allocate memory (if CONFIG_GRAN_SINGLE=y, GRAN_HANDLE is not defined):

FAR uint8_t *dma_memory = (FAR uint8_t *)gran_alloc(handle, 47);

This actually allocates 64 bytes (wasting 17 bytes), aligned to at least (1 << log2align).
- Page allocator

The page allocator is an application of the granule allocator: a special-purpose memory allocator used to allocate physical memory pages on systems with a memory management unit (MMU). Its code also lives in the mm_gran directory.

- Shared memory management

When NuttX is built in kernel mode, address spaces are separated, and sharing memory between the privileged kernel address space and unprivileged user-mode addresses must be managed explicitly. Shared memory regions are user-accessible regions that can be attached into a user process's address space so that memory can be shared between user processes. The shared memory code lives in the mm/shm directory.
Data structures

Only a few data structures are involved in memory management; the three key ones are shown below:
/* This describes an allocated chunk. An allocated chunk is
* distinguished from a free chunk by bit 15/31 of the 'preceding' chunk
* size. If set, then this is an allocated chunk.
*/
struct mm_allocnode_s
{
mmsize_t size; /* Size of this chunk */
mmsize_t preceding; /* Size of the preceding chunk */
};
/* This describes a free chunk */
struct mm_freenode_s
{
mmsize_t size; /* Size of this chunk */
mmsize_t preceding; /* Size of the preceding chunk */
FAR struct mm_freenode_s *flink; /* Supports a doubly linked list */
FAR struct mm_freenode_s *blink;
};
/* This describes one heap (possibly with multiple regions) */
struct mm_heap_s
{
/* Mutually exclusive access to this data set is enforced with
* the following un-named semaphore.
*/
sem_t mm_semaphore;
pid_t mm_holder;
int mm_counts_held;
/* This is the size of the heap provided to mm */
size_t mm_heapsize;
/* This is the first and last nodes of the heap */
FAR struct mm_allocnode_s *mm_heapstart[CONFIG_MM_REGIONS];
FAR struct mm_allocnode_s *mm_heapend[CONFIG_MM_REGIONS];
#if CONFIG_MM_REGIONS > 1
int mm_nregions;
#endif
/* All free nodes are maintained in a doubly linked list. This
* array provides some hooks into the list at various points to
* speed searches for free nodes.
*/
struct mm_freenode_s mm_nodelist[MM_NNODES];
};
- struct mm_allocnode_s describes an allocated chunk. Bit 15 or bit 31 of the preceding member marks whether the chunk has been freed; which bit is used depends on the memory model.
- struct mm_freenode_s describes a free chunk; all free chunks are linked into doubly linked lists.
- struct mm_heap_s describes a heap. Two points deserve attention: (1) mm_heapstart/mm_heapend mark the start and end of the heap; they act as two sentinels that guarantee all allocation happens between them, with memory nodes created in the space they bracket; (2) mm_nodelist holds all the free chunks. It is an array whose elements head doubly linked lists; its size is MM_NNODES = (MM_MAX_SHIFT - MM_MIN_SHIFT + 1), where MM_MIN_SHIFT = 4 corresponds to 16 bytes and MM_MAX_SHIFT = 22 corresponds to 4 MiB. Much like the Linux buddy system, free chunks are grouped by powers of two, and each array entry heads the list for one power of two.
One more point to note: at the lowest level memory is organized as chunks, and a chunk consists of two parts: header + payload, i.e. the bookkeeping information followed by the memory actually usable by the caller.

As shown in the figure (memory layout):
How it works

mm_heap/kmm_heap/umm_heap/

Take mm_malloc() and mm_free() as examples.

As the figure above shows, mm_nodelist[] holds doubly linked lists of free chunks of different sizes, grouped by powers of two (16, 32, 64, 128, 256, 512, and so on). A chunk whose size lies between 16 and 32 goes on the doubly linked list headed by the 16-byte entry, kept sorted by size, and likewise for the other entries.
mm_malloc()

- When an allocation of size bytes is requested, size is first adjusted for alignment; the adjusted size is then converted to a power-of-two index into mm_nodelist[], which selects the best-matching doubly linked list.
- That list (already sorted by size) is walked to find the first chunk large enough for the requested size.
- If the chunk is larger than the requested size, it is split into two chunks: the requested part (node) is removed from the list and returned to the caller, while the remaining part (remainder) is added back into the heap; based on the remainder's size, the matching power-of-two free list is found and the remainder is inserted into it.
- During allocation, the MM_ALLOC_BIT in the preceding member is set to mark the chunk as allocated.

Note that while the doubly linked lists organize the chunk descriptors in a scattered fashion, the chunks themselves occupy one physically contiguous region of memory.
mm_malloc
/****************************************************************************
* Name: mm_malloc
*
* Description:
* Find the smallest chunk that satisfies the request. Take the memory from
* that chunk, save the remaining, smaller chunk (if any).
*
* 8-byte alignment of the allocated data is assured.
*
****************************************************************************/
FAR void *mm_malloc(FAR struct mm_heap_s *heap, size_t size)
{
FAR struct mm_freenode_s *node;
void *ret = NULL;
int ndx;
/* Handle bad sizes */
if (size < 1)
{
return NULL;
}
/* Adjust the size to account for (1) the size of the allocated node and
* (2) to make sure that it is an even multiple of our granule size.
*/
size = MM_ALIGN_UP(size + SIZEOF_MM_ALLOCNODE);
/* We need to hold the MM semaphore while we muck with the nodelist. */
mm_takesemaphore(heap);
/* Get the location in the node list to start the search. Special case
* really big allocations
*/
if (size >= MM_MAX_CHUNK)
{
ndx = MM_NNODES-1;
}
else
{
/* Convert the request size into a nodelist index */
ndx = mm_size2ndx(size);
}
/* Search for a large enough chunk in the list of nodes. This list is
* ordered by size, but will have occasional zero sized nodes as we visit
* other mm_nodelist[] entries.
*/
for (node = heap->mm_nodelist[ndx].flink;
node && node->size < size;
node = node->flink);
/* If we found a node with non-zero size, then this is the one to use.
* Since the list is ordered, we know that it must be the best fitting
* chunk available.
*/
if (node)
{
FAR struct mm_freenode_s *remainder;
FAR struct mm_freenode_s *next;
size_t remaining;
/* Remove the node. There must be a predecessor, but there may not be
* a successor node.
*/
DEBUGASSERT(node->blink);
node->blink->flink = node->flink;
if (node->flink)
{
node->flink->blink = node->blink;
}
/* Check if we have to split the free node into one of the allocated
* size and another smaller freenode. In some cases, the remaining
* bytes can be smaller (they may be SIZEOF_MM_ALLOCNODE). In that
* case, we will just carry the few wasted bytes at the end of the
* allocation.
*/
remaining = node->size - size;
if (remaining >= SIZEOF_MM_FREENODE)
{
/* Get a pointer to the next node in physical memory */
next = (FAR struct mm_freenode_s *)(((FAR char *)node) + node->size);
/* Create the remainder node */
remainder = (FAR struct mm_freenode_s *)(((FAR char *)node) + size);
remainder->size = remaining;
remainder->preceding = size;
/* Adjust the size of the node under consideration */
node->size = size;
/* Adjust the 'preceding' size of the (old) next node, preserving
* the allocated flag.
*/
next->preceding = remaining | (next->preceding & MM_ALLOC_BIT);
/* Add the remainder back into the nodelist */
mm_addfreechunk(heap, remainder);
}
/* Handle the case of an exact size match */
node->preceding |= MM_ALLOC_BIT;
ret = (void *)((FAR char *)node + SIZEOF_MM_ALLOCNODE);
}
mm_givesemaphore(heap);
/* If CONFIG_DEBUG_MM is defined, then output the result of the allocation
* to the SYSLOG.
*/
#ifdef CONFIG_DEBUG_MM
if (!ret)
{
mwarn("WARNING: Allocation failed, size %d\n", size);
}
else
{
minfo("Allocated %p, size %d\n", ret, size);
}
#endif
return ret;
}
mm_free

- When memory is freed, the address passed in (the payload) is decremented by SIZEOF_MM_ALLOCNODE, the size of the chunk header, to recover the chunk descriptor, and the chunk is marked free;
- The chunk's next node is checked: if it is also free, the two are merged;
- The chunk's previous node is checked: if it is also free, they are merged as well;

Note that the nodes checked here are the physically adjacent chunks; because their sizes differ, the structures describing them may sit on different free lists. As shown in the figure below:
mm_free
/****************************************************************************
* Name: mm_free
*
* Description:
* Returns a chunk of memory to the list of free nodes, merging with
* adjacent free chunks if possible.
*
****************************************************************************/
void mm_free(FAR struct mm_heap_s *heap, FAR void *mem)
{
FAR struct mm_freenode_s *node;
FAR struct mm_freenode_s *prev;
FAR struct mm_freenode_s *next;
minfo("Freeing %p\n", mem);
/* Protect against attempts to free a NULL reference */
if (!mem)
{
return;
}
/* We need to hold the MM semaphore while we muck with the
* nodelist.
*/
mm_takesemaphore(heap);
/* Map the memory chunk into a free node */
node = (FAR struct mm_freenode_s *)((FAR char *)mem - SIZEOF_MM_ALLOCNODE);
node->preceding &= ~MM_ALLOC_BIT;
/* Check if the following node is free and, if so, merge it */
next = (FAR struct mm_freenode_s *)((FAR char *)node + node->size);
if ((next->preceding & MM_ALLOC_BIT) == 0)
{
FAR struct mm_allocnode_s *andbeyond;
/* Get the node following the next node (which will
* become the new next node). We know that we can never
* index past the tail chunk because it is always allocated.
*/
andbeyond = (FAR struct mm_allocnode_s *)((FAR char *)next + next->size);
/* Remove the next node. There must be a predecessor,
* but there may not be a successor node.
*/
DEBUGASSERT(next->blink);
next->blink->flink = next->flink;
if (next->flink)
{
next->flink->blink = next->blink;
}
/* Then merge the two chunks */
node->size += next->size;
andbeyond->preceding = node->size | (andbeyond->preceding & MM_ALLOC_BIT);
next = (FAR struct mm_freenode_s *)andbeyond;
}
/* Check if the preceding node is also free and, if so, merge
* it with this node
*/
prev = (FAR struct mm_freenode_s *)((FAR char *)node - node->preceding);
if ((prev->preceding & MM_ALLOC_BIT) == 0)
{
/* Remove the node. There must be a predecessor, but there may
* not be a successor node.
*/
DEBUGASSERT(prev->blink);
prev->blink->flink = prev->flink;
if (prev->flink)
{
prev->flink->blink = prev->blink;
}
/* Then merge the two chunks */
prev->size += node->size;
next->preceding = prev->size | (next->preceding & MM_ALLOC_BIT);
node = prev;
}
/* Add the merged node to the nodelist */
mm_addfreechunk(heap, node);
mm_givesemaphore(heap);
}
The standard library's malloc()/free() are implemented by calling mm_malloc()/mm_free(). malloc() also calls sbrk(), which is backed by mm_sbrk(); its job is to extend the heap region. As mentioned above, the heap structure has a member mm_heapend that records the end of the heap; mm_sbrk() extends that end, enlarging the heap.
umm_heap/, kmm_heap/

The code under both paths is implemented by calling the interfaces in the mm_heap/ directory, so the logic is identical.
mm_gran/

The mm_gran directory holds the granule allocator logic. The key data structure is struct gran_s:
/* This structure represents the state of one granule allocation */
struct gran_s
{
uint8_t log2gran; /* Log base 2 of the size of one granule */
uint16_t ngranules; /* The total number of (aligned) granules in the heap */
#ifdef CONFIG_GRAN_INTR
irqstate_t irqstate; /* For exclusive access to the GAT */
#else
sem_t exclsem; /* For exclusive access to the GAT */
#endif
uintptr_t heapstart; /* The aligned start of the granule heap */
uint32_t gat[1]; /* Start of the granule allocation table */
};
- log2gran: log base 2 of the granule size; e.g. log2gran = 4 means 16-byte granules;
- ngranules: the total number of granules in the heap;
- irqstate/exclsem: mutual exclusion for the granule allocation table (GAT);
- heapstart: the start address of the heap;
- gat[]: the start of the granule allocation table. The array is declared with a single element only to mark the address where the 32-bit table entries begin; the table actually extends beyond it. In each 32-bit entry, every bit records whether the corresponding granule has been allocated, which is also why a single allocation is limited to 32 granules.
The principle is illustrated in the figure below:

Granule allocator
gran_alloc()

gran_alloc() delegates to gran_common_alloc():

- From the requested size, compute the number of granules needed, ngranules;
- Scan the granule allocation table for ngranules contiguous free granules. During the scan the granule index advances in steps of 32, i.e. one 32-granule window at a time; granule indices map directly onto GAT entries, so granules 0-31 correspond to gat[0], granules 32-63 to gat[1], and so on;
- Within a GAT entry, the bits are examined with a binary-search-like strategy: the candidate range is halved each step to locate the first free bit, and the entry's value is shifted accordingly. Allocations that straddle two windows must also be handled; that is, the requested region may consist of two parts: the tail of one 32-granule window plus the head of the next.
static inline FAR void *gran_common_alloc(FAR struct gran_s *priv, size_t size)
{
unsigned int ngranules;
size_t tmpmask;
uintptr_t alloc;
uint32_t curr;
uint32_t next;
uint32_t mask;
int granidx;
int gatidx;
int bitidx;
int shift;
DEBUGASSERT(priv && size <= 32 * (1 << priv->log2gran));
if (priv && size > 0)
{
/* Get exclusive access to the GAT */
gran_enter_critical(priv);
/* How many contiguous granules do we need to find? */
tmpmask = (1 << priv->log2gran) - 1;
ngranules = (size + tmpmask) >> priv->log2gran;
/* Then create mask for that number of granules */
DEBUGASSERT(ngranules <= 32);
mask = 0xffffffff >> (32 - ngranules);
/* Now search the granule allocation table for that number of contiguous granules */
alloc = priv->heapstart;
for (granidx = 0; granidx < priv->ngranules; granidx += 32)
{
/* Get the GAT index associated with the granule table entry */
gatidx = granidx >> 5;
curr = priv->gat[gatidx];
/* Handle the case where there are no free granules in the entry */
if (curr == 0xffffffff)
{
alloc += (32 << priv->log2gran);
continue;
}
/* Get the next entry from the GAT to support a 64 bit shift */
if (granidx + 32 < priv->ngranules)
{
next = priv->gat[gatidx + 1];
}
/* Use all ones when we are at the last entry in the GAT (meaning
* nothing more can be allocated).
*/
else
{
next = 0xffffffff;
}
/* Search through the allocations in the 'curr' GAT entry
* to see if we can satisfy the allocation starting in that
* entry.
*
* This loop continues until either all of the bits have been
* examined (bitidx >= 32), or until there are insufficient
* granules left to satisfy the allocation.
*/
for (bitidx = 0;
bitidx < 32 && (granidx + bitidx + ngranules) <= priv->ngranules;
)
{
/* Break out if there are no further free bits in 'curr'.
* All of the zero bits might have gotten shifted out.
*/
if (curr == 0xffffffff)
{
break;
}
/* Check for the first zero bit in the lower or upper 16-bits.
* From the test above, we know that at least one of the 32-
* bits in 'curr' is zero.
*/
else if ((curr & 0x0000ffff) == 0x0000ffff)
{
/* Not in the lower 16 bits. The first free bit must be
* in the upper 16 bits.
*/
shift = 16;
}
/* We know that the first free bit is now within the lower 16
* bits of 'curr'. Is it in the upper or lower byte?
*/
else if ((curr & 0x0000ff) == 0x000000ff)
{
/* Not in the lower 8 bits. The first free bit must be in
* the upper 8 bits.
*/
shift = 8;
}
/* We know that the first free bit is now within the lower 8
* bits of 'curr'. Is it in the upper or lower nibble?
*/
else if ((curr & 0x00000f) == 0x0000000f)
{
/* Not in the lower 4 bits. The first free bit must be in
* the upper 4 bits.
*/
shift = 4;
}
/* We know that the first free bit is now within the lower 4 bits
* of 'curr'. Is it in the upper or lower pair?
*/
else if ((curr & 0x000003) == 0x00000003)
{
/* Not in the lower 2 bits. The first free bit must be in
* the upper 2 bits.
*/
shift = 2;
}
/* We know that the first free bit is now within the lower 2 bits
* of 'curr'. Check if we have the allocation at this bit position.
*/
else if ((curr & mask) == 0)
{
/* Yes.. mark these granules allocated */
gran_mark_allocated(priv, alloc, ngranules);
/* And return the allocation address */
gran_leave_critical(priv);
return (FAR void *)alloc;
}
/* The free allocation does not start at this position */
else
{
shift = 1;
}
/* Set up for the next time through the loop. Perform a 64
* bit shift to move to the next gran position and increment
* to the next candidate allocation address.
*/
alloc += (shift << priv->log2gran);
curr = (curr >> shift) | (next << (32 - shift));
next >>= shift;
bitidx += shift;
}
}
gran_leave_critical(priv);
}
return NULL;
}
gran_free()

Similarly, gran_free() delegates to gran_common_free() to release memory:

- From the address being freed, compute the index of the first granule;
- From the size being freed, compute the total number of granules to release, ngranules;
- Check whether ngranules exceeds the bit positions remaining in the first GAT entry. If it does, the allocation straddled two entries and both GAT entries must be updated when freeing; otherwise only one entry needs to change.

The code:
static inline void gran_common_free(FAR struct gran_s *priv,
FAR void *memory, size_t size)
{
unsigned int granno;
unsigned int gatidx;
unsigned int gatbit;
unsigned int granmask;
unsigned int ngranules;
unsigned int avail;
uint32_t gatmask;
DEBUGASSERT(priv && memory && size <= 32 * (1 << priv->log2gran));
/* Get exclusive access to the GAT */
gran_enter_critical(priv);
/* Determine the granule number of the first granule in the allocation */
granno = ((uintptr_t)memory - priv->heapstart) >> priv->log2gran;
/* Determine the GAT table index and bit number associated with the
* allocation.
*/
gatidx = granno >> 5;
gatbit = granno & 31;
/* Determine the number of granules in the allocation */
granmask = (1 << priv->log2gran) - 1;
ngranules = (size + granmask) >> priv->log2gran;
/* Clear bits in the GAT entry or entries */
avail = 32 - gatbit;
if (ngranules > avail)
{
/* Clear bits in the first GAT entry */
gatmask = (0xffffffff << gatbit);
DEBUGASSERT((priv->gat[gatidx] & gatmask) == gatmask);
priv->gat[gatidx] &= ~gatmask;
ngranules -= avail;
/* Clear bits in the second GAT entry */
gatmask = 0xffffffff >> (32 - ngranules);
DEBUGASSERT((priv->gat[gatidx+1] & gatmask) == gatmask);
priv->gat[gatidx+1] &= ~gatmask;
}
/* Handle the case where all of the granules came from one entry */
else
{
/* Clear bits in a single GAT entry */
gatmask = 0xffffffff >> (32 - ngranules);
gatmask <<= gatbit;
DEBUGASSERT((priv->gat[gatidx] & gatmask) == gatmask);
priv->gat[gatidx] &= ~gatmask;
}
gran_leave_critical(priv);
}
In NuttX, the paging mechanism is built on top of this granule allocator.
shm/

Interfaces

Shared memory is available only when NuttX is built in kernel mode (CONFIG_BUILD_KERNEL=y). It provides the following interfaces:

- int shmget(key_t key, size_t size, int shmflg): get the shared memory identifier corresponding to key;
- void *shmat(int shmid, FAR const void *shmaddr, int shmflg): attach the memory segment identified by shmid to the calling process's address space;
- int shmctl(int shmid, int cmd, FAR struct shmid_ds *buf): perform the shared memory control operation selected by cmd;
- int shmdt(FAR const void *shmaddr): detach the shared memory segment at shmaddr from the calling process's address space;
Data structures

The core data structures are:
/* Unsigned integer used for the number of current attaches that must be
* able to store values at least as large as a type unsigned short.
*/
typedef unsigned short shmatt_t;
struct shmid_ds
{
struct ipc_perm shm_perm; /* Operation permission structure */
size_t shm_segsz; /* Size of segment in bytes */
pid_t shm_lpid; /* Process ID of last shared memory operation */
pid_t shm_cpid; /* Process ID of creator */
shmatt_t shm_nattch; /* Number of current attaches */
time_t shm_atime; /* Time of last shmat() */
time_t shm_dtime; /* Time of last shmdt() */
time_t shm_ctime; /* Time of last change by shmctl() */
};
/* This structure represents the state of one shared memory region
* allocation. Cast compatible with struct shmid_ds.
*/
/* Bit definitions for the struct shm_region_s sr_flags field */
#define SRFLAG_AVAILABLE 0 /* Available if no flag bits set */
#define SRFLAG_INUSE (1 << 0) /* Bit 0: Region is in use */
#define SRFLAG_UNLINKED (1 << 1) /* Bit 1: Region persists while referenced */
struct shm_region_s
{
struct shmid_ds sr_ds; /* Region info */
bool sr_flags; /* See SRFLAGS_* definitions */
key_t sr_key; /* Lookup key */
sem_t sr_sem; /* Manages exclusive access to this region */
/* List of physical pages allocated for this memory region */
uintptr_t sr_pages[CONFIG_ARCH_SHM_NPAGES];
};
/* This structure represents the set of all shared memory regions.
* Access to the region
*/
struct shm_info_s
{
sem_t si_sem; /* Manages exclusive access to the region list */
struct shm_region_s si_region[CONFIG_ARCH_SHM_MAXREGIONS];
};
- struct shm_info_s: describes the set of all shared memory regions, access to which must be mutually exclusive. The implementation defines a global variable g_shminfo of this type, representing all shared memory regions;
- struct shm_region_s: describes one shared memory region: its usage flags, its lookup key, the region size, and so on;
- struct shmid_ds: describes a region's bookkeeping information, mainly the permissions, the process IDs, and the timestamps of the various operations;
shared memory

int shmget(key_t key, size_t size, int shmflg)

- Search every region in the shared memory region set for one whose key matches;
- If none is found, call shm_create() to create one. The regions are in fact statically reserved, so this just finds an unused slot in the struct shm_region_s si_region[] array and performs some initialization;
- If a region is found, check whether it is large enough for the requested size; if not, call shm_extend() to enlarge the region's physical memory.
int shmget(key_t key, size_t size, int shmflg)
{
FAR struct shm_region_s *region;
int shmid = -1;
int ret;
/* Check for the special case where the caller doesn't really want shared
* memory (then why do they bother to call us?)
*/
if (key == IPC_PRIVATE)
{
/* Not yet implemented */
ret = -ENOSYS;
goto errout;
}
/* Get exclusive access to the global list of shared memory regions */
ret = sem_wait(&g_shminfo.si_sem);
if (ret >= 0)
{
/* Find the requested memory region */
ret = shm_find(key);
if (ret < 0)
{
/* The memory region does not exist.. create it if IPC_CREAT is
* included in the shmflags.
*/
if ((shmflg & IPC_CREAT) != 0)
{
/* Create the memory region */
ret = shm_create(key, size, shmflg);
if (ret < 0)
{
shmerr("ERROR: shm_create failed: %d\n", ret);
goto errout_with_semaphore;
}
/* Return the shared memory ID */
shmid = ret;
}
else
{
/* Fail with ENOENT */
goto errout_with_semaphore;
}
}
/* The region exists */
else
{
/* Remember the shared memory ID */
shmid = ret;
/* Is the region big enough for the request? */
region = &g_shminfo.si_region[shmid];
if (region->sr_ds.shm_segsz < size)
{
/* Were we asked to create the region? If so we can just
* extend it.
*
* REVISIT: We should check the mode bits of the regions
* first
*/
if ((shmflg & IPC_CREAT) != 0)
{
/* Extend the region */
ret = shm_extend(shmid, size);
if (ret < 0)
{
shmerr("ERROR: shm_create failed: %d\n", ret);
goto errout_with_semaphore;
}
}
else
{
/* Fail with EINVAL */
ret = -EINVAL;
goto errout_with_semaphore;
}
}
/* The region is already big enough or else we successfully
* extended the size of the region. If the region was previously
* deleted, but waiting for processes to detach from the region,
* then it is no longer deleted.
*/
region->sr_flags = SRFLAG_INUSE;
}
/* Release our lock on the shared memory region list */
sem_post(&g_shminfo.si_sem);
}
return shmid;
errout_with_semaphore:
sem_post(&g_shminfo.si_sem);
errout:
set_errno(-ret);
return ERROR;
}
shmat()/shmdt()

These two functions attach a given address range to, or detach it from, the process address space.

Take shmat() as an example: when a user process calls it, a range of virtual address space is allocated through the gran_alloc() granule allocator interface. Since the shared memory itself is a range of physical memory, an architecture-specific function is then called to map that virtual range onto the shared physical memory; in short, to update the page table entries. Architectures typically provide an interface such as up_shmat() for this.

shmdt() works on the same principle: it ultimately clears the page table entries to remove the mapping.
Summary

NuttX memory management has two cores: (1) mm_heap/, where physical memory allocation uses a mechanism similar to the buddy system, used in the flat build mode; (2) mm_gran/, the granule allocator, which is the foundation of the paging mechanism as well as of shared memory, used in the kernel build mode.