
GPU Programming Acceleration


Presentation Transcript


  1. GPU Programming Acceleration - The SIFT Algorithm for Video and Image Content Understanding

  2. Basic Methods for Video and Image Content Understanding

  3. Step 1. Feature Detection • Step 2. Feature Description • Step 3. Visual Vocabulary Machine Learning • Step 4. New Feature Generation / Model Training

  4. Building a Bag of Words
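The slide itself only carries the Bag-of-Words diagram; as a rough, hypothetical illustration, building a Bag of Words here amounts to assigning each SIFT descriptor to visual words (clusters) and counting the votes per word. Names below are assumptions, not the author's code:

```cpp
#include <vector>

// Hypothetical sketch: turn per-descriptor cluster assignments into a
// Bag-of-Words histogram. assignments[i] is the index of the visual word
// (cluster) that descriptor i was assigned to; K is the vocabulary size.
std::vector<int> buildBagOfWords(const std::vector<int>& assignments, int K) {
    std::vector<int> histogram(K, 0);
    for (int word : assignments) {
        histogram[word] += 1;   // one vote per descriptor
    }
    return histogram;           // the image's Bag-of-Words vector
}
```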

  5. The configuration of my GPU • 1. Total amount of global memory: 511 MBytes • 2. GPU clock rate: 1468 MHz (1.47 GHz) • 3. Memory bus width: 64-bit • 4. Total amount of constant memory: 65536 bytes • 5. Total amount of shared memory per block: 16384 bytes • 6. Total number of registers available per block: 16384 • 7. …
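These values can be queried at runtime with the standard CUDA call cudaGetDeviceProperties; a minimal sketch (the field names are actual cudaDeviceProp members, the device index 0 is an assumption):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);                    // query device 0
    printf("Global memory:        %zu MB\n", prop.totalGlobalMem >> 20);
    printf("Clock rate:           %d kHz\n", prop.clockRate);
    printf("Memory bus width:     %d-bit\n", prop.memoryBusWidth);
    printf("Constant memory:      %zu bytes\n", prop.totalConstMem);
    printf("Shared memory/block:  %zu bytes\n", prop.sharedMemPerBlock);
    printf("Registers/block:      %d\n", prop.regsPerBlock);
    return 0;
}
```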

  6. The CPU Version • Step 1. Read in the objects (128-dimensional feature vectors) one at a time • Step 2. Compare each object against all n cluster centers, which are also 128-dimensional vectors • Step 3. Pick the three clusters closest to the object and assign it to them • Drawback: everything runs serially and takes a huge amount of time!
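A minimal sketch of this serial loop, with hypothetical names (objects, clusters, top3) and squared Euclidean distance over the 128-dimensional SIFT descriptors:

```cpp
#include <cfloat>

const int DIM = 128;   // SIFT descriptor dimensionality

// Squared Euclidean distance between one descriptor and one cluster center.
float sqDist(const float* obj, const float* cluster) {
    float d = 0.0f;
    for (int k = 0; k < DIM; ++k) {
        float diff = obj[k] - cluster[k];
        d += diff * diff;
    }
    return d;
}

// For each of the m objects, find the indices of its 3 nearest clusters
// among the n cluster centers. Everything runs serially on the CPU.
void assignCPU(const float* objects, int m,
               const float* clusters, int n,
               int* top3 /* m x 3 */) {
    for (int i = 0; i < m; ++i) {                 // Step 1: one object at a time
        float best[3] = { FLT_MAX, FLT_MAX, FLT_MAX };
        int   idx[3]  = { -1, -1, -1 };
        for (int j = 0; j < n; ++j) {             // Step 2: compare with n clusters
            float d = sqDist(objects + i * DIM, clusters + j * DIM);
            // Step 3: keep the three smallest distances seen so far.
            for (int s = 0; s < 3; ++s) {
                if (d < best[s]) {
                    for (int t = 2; t > s; --t) { best[t] = best[t-1]; idx[t] = idx[t-1]; }
                    best[s] = d; idx[s] = j;
                    break;
                }
            }
        }
        top3[i * 3 + 0] = idx[0];
        top3[i * 3 + 1] = idx[1];
        top3[i * 3 + 2] = idx[2];
    }
}
```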

  7. The GPU 1.0 Version • Step 1. Copy the feature objects and the cluster centers into GPU global memory • Step 2. Launch as many threads as there are objects • Step 3. Each thread uses threadIdx and blockIdx to determine which part of the computation it is responsible for • Step 4. All threads compute in parallel, and the results are copied back to host memory • Drawback: a large number of repeated global-memory accesses means high latency!
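A minimal sketch of what such a 1.0 kernel could look like, reusing the hypothetical names from the CPU sketch; note that every distance term is fetched from global memory, which is where the latency comes from:

```cpp
#include <cfloat>

const int DIM = 128;

// One thread per object; all reads go to global memory (version 1.0).
__global__ void assignKernelV10(const float* objects, int m,
                                const float* clusters, int n,
                                int* top3 /* m x 3 */) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // which object this thread owns
    if (i >= m) return;

    float best[3] = { FLT_MAX, FLT_MAX, FLT_MAX };
    int   idx[3]  = { -1, -1, -1 };

    for (int j = 0; j < n; ++j) {
        float d = 0.0f;
        for (int k = 0; k < DIM; ++k) {              // repeated global-memory reads
            float diff = objects[i * DIM + k] - clusters[j * DIM + k];
            d += diff * diff;
        }
        for (int s = 0; s < 3; ++s) {                // keep the 3 smallest distances
            if (d < best[s]) {
                for (int t = 2; t > s; --t) { best[t] = best[t-1]; idx[t] = idx[t-1]; }
                best[s] = d; idx[s] = j;
                break;
            }
        }
    }
    top3[i * 3 + 0] = idx[0];
    top3[i * 3 + 1] = idx[1];
    top3[i * 3 + 2] = idx[2];
}

// Launch: one thread per object, e.g. assignKernelV10<<<(m + 255) / 256, 256>>>(...)
```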

  8. The GPU 2.1 Version • Shared memory: • 1. A region of memory allocated per Block and shared by all threads within that Block • 2. Access time from shared memory is almost negligible (comparable to a cache) • 3. Unlike a cache, however, shared memory has no hardware addressing or replacement policy; the programmer manages it explicitly
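A tiny illustration of these three points (hypothetical names; TILE is an assumed tile size): the __shared__ array is allocated per Block, the threads themselves must copy data into it, and __syncthreads() provides the coordination that a hardware cache would otherwise handle:

```cpp
#define TILE 16            // assumed tile size: 16 * 128 floats = 8 KB of shared memory
const int DIM = 128;

__global__ void sharedMemoryIdiom(const float* clusters) {
    // 1. One allocation per Block, visible to every thread in the Block.
    __shared__ float tile[TILE * DIM];

    // 2./3. No hardware fills it for us: threads cooperatively copy data in,
    //       then synchronize before anyone reads it.
    for (int k = threadIdx.x; k < TILE * DIM; k += blockDim.x) {
        tile[k] = clusters[k];
    }
    __syncthreads();

    // ... all later reads hit shared memory instead of global memory ...
}
```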

  9. The GPU 2.1 Version • Using shared memory: • The algorithm is almost the same as in version 1.0. • After the data is copied into global memory, the threads of each Block cooperatively load the portion of the data assigned to their Block into shared memory, and all subsequent accesses go through shared memory. • Drawback: shared memory is very small and usually not large enough, so the input data has to be split into tiles, which makes the indexing logic considerably more complex!
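A sketch of this 2.1 scheme under the same hypothetical names: the cluster centers are processed in tiles small enough for the 16 KB of per-block shared memory listed earlier, which is exactly the indexing complication the slide mentions:

```cpp
#include <cfloat>

const int DIM  = 128;
const int TILE = 16;   // clusters per tile: 16 * 128 floats = 8 KB of shared memory

__global__ void assignKernelV21(const float* objects, int m,
                                const float* clusters, int n,
                                int* top3) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    __shared__ float tile[TILE * DIM];

    float best[3] = { FLT_MAX, FLT_MAX, FLT_MAX };
    int   idx[3]  = { -1, -1, -1 };

    // Walk over the cluster centers one tile at a time.
    for (int base = 0; base < n; base += TILE) {
        int rows = min(TILE, n - base);

        // All threads of the Block cooperatively stage this tile.
        for (int k = threadIdx.x; k < rows * DIM; k += blockDim.x) {
            tile[k] = clusters[base * DIM + k];
        }
        __syncthreads();                      // extra synchronization cost

        if (i < m) {
            for (int j = 0; j < rows; ++j) {
                float d = 0.0f;
                for (int k = 0; k < DIM; ++k) {
                    float diff = objects[i * DIM + k] - tile[j * DIM + k];
                    d += diff * diff;
                }
                for (int s = 0; s < 3; ++s) { // keep the 3 smallest distances
                    if (d < best[s]) {
                        for (int t = 2; t > s; --t) { best[t] = best[t-1]; idx[t] = idx[t-1]; }
                        best[s] = d; idx[s] = base + j;   // tile-relative to global index
                        break;
                    }
                }
            }
        }
        __syncthreads();                      // tile is overwritten in the next iteration
    }

    if (i < m) {
        top3[i * 3 + 0] = idx[0];
        top3[i * 3 + 1] = idx[1];
        top3[i * 3 + 2] = idx[2];
    }
}
```

Note that the object descriptors themselves are still read repeatedly from global memory here, which is the leftover bottleneck described on slide 12.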

  10. The GPU 2.1 Version • Other optimizations: • 1. How threads divide the work of loading data • 1.1) Row-wise loading • 1.2) Column-wise loading • 2. Making maximum use of each SM's resources • How many threads per Block is appropriate? • SM resources = registers + block slots + thread slots
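On the question of how many threads per Block: one possible approach is the CUDA runtime helper cudaOccupancyMaxPotentialBlockSize, which balances exactly the resources listed above (registers, block slots, thread slots, shared memory). A minimal sketch, assuming it is compiled together with the hypothetical assignKernelV21 from the previous sketch:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Declaration of the kernel from the previous sketch (its definition must be
// linked into the same program).
__global__ void assignKernelV21(const float*, int, const float*, int, int*);

int main() {
    int minGridSize = 0, blockSize = 0;

    // Ask the runtime for the block size that maximizes occupancy for this
    // kernel, given its register usage and static shared-memory footprint.
    cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize,
                                       assignKernelV21, 0 /* dynamic smem */, 0);

    printf("Suggested block size: %d threads\n", blockSize);
    printf("Minimum grid size for full occupancy: %d blocks\n", minGridSize);
    return 0;
}
```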

  11. Performance Comparison

  12. Remaining Improvements • Shortcomings: • 1. Shared memory is simply too small: some repeatedly accessed data cannot fit into it at all and must still be read from global memory, which becomes the bottleneck of the program. • 2. Using shared memory introduces new costs, such as extra synchronizations and additional index arithmetic. • Improvements: 1. Use texture memory 2. Restructure the input data
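For the first proposed improvement, a minimal sketch of routing the cluster data through the texture cache. It uses the texture-object API (tex1Dfetch over a cudaTextureObject_t), which requires a newer GPU than the one described earlier; older devices would use texture references instead. All names are hypothetical:

```cpp
#include <cuda_runtime.h>

const int DIM = 128;

// Read cluster data through the texture cache instead of plain global loads.
__global__ void distanceToCluster(cudaTextureObject_t clustersTex, int j,
                                  const float* objects, int m, float* out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= m) return;
    float d = 0.0f;
    for (int k = 0; k < DIM; ++k) {
        float c = tex1Dfetch<float>(clustersTex, j * DIM + k);  // via texture cache
        float diff = objects[i * DIM + k] - c;
        d += diff * diff;
    }
    out[i] = d;   // squared distance from object i to cluster j
}

// Host side: wrap an existing device buffer d_clusters (n clusters) in a texture object.
cudaTextureObject_t makeClusterTexture(float* d_clusters, int n) {
    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypeLinear;
    resDesc.res.linear.devPtr = d_clusters;
    resDesc.res.linear.desc = cudaCreateChannelDesc<float>();
    resDesc.res.linear.sizeInBytes = size_t(n) * DIM * sizeof(float);

    cudaTextureDesc texDesc = {};
    texDesc.readMode = cudaReadModeElementType;

    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &resDesc, &texDesc, nullptr);
    return tex;
}
```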

  13. Thank you! • Algorithm design: / • Coding: / • Slide production: / • 陈耀辉 102415
