Author: Geng Zongjie (耿宗杰)
Preface
Articles about pprof are everywhere online, but most are the same cookie-cutter command walkthroughs, and genuinely hands-on ones are rare. In this post I draw on material from the Go community and my own experience to walk through profiling and optimizing a Go program in practice.
Optimization Approach
First, a word on the general approach to performance optimization. Analyzing and optimizing a system's performance should proceed from the large to the small: from the business architecture, to the system architecture, to the interactions between modules, and finally to the code itself. Optimizing the business architecture has the best cost-benefit ratio; it is technically the least demanding yet can yield a large gain. For example, simply by talking with colleagues or other departments you may be able to drop some interface calls or remove unnecessary, convoluted business logic, which easily improves the performance of the whole system.
System-architecture optimization, such as adding a cache or moving from HTTP to RPC, can also bring a sizable improvement for a modest investment. Last comes code-level optimization, which itself has two parts: first, choosing and using proper data structures, and only then doing profile-driven analysis on top of that. For example, when using a slice in Go, request enough capacity up front whenever possible so that append does not have to re-allocate and copy when the capacity is exceeded; when protecting shared state, prefer RWMutex over Mutex where reads dominate, and under extremely high concurrency consider replacing locks with finer-grained atomic operations.
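To make those code-level points concrete, here is a minimal Go sketch (the names are illustrative and not part of the program analyzed below): pre-allocating slice capacity, an RWMutex for read-heavy shared state, and an atomic counter.

package example

import (
	"sync"
	"sync/atomic"
)

// Pre-allocate capacity so append never has to grow and copy the backing array.
func buildIDs(n int) []int {
	ids := make([]int, 0, n)
	for i := 0; i < n; i++ {
		ids = append(ids, i)
	}
	return ids
}

// Read-heavy shared state: RWMutex lets many readers proceed in parallel.
type counters struct {
	mu sync.RWMutex
	m  map[string]int64
}

func (c *counters) Get(k string) int64 {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.m[k]
}

// A single hot counter under very high concurrency: an atomic beats any lock.
var hits int64

func recordHit() { atomic.AddInt64(&hits, 1) }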
Optimization in Practice
Now to the main part. The program to be optimized is an example from the community, and the code is a bit long. It implements loop recognition on control-flow graphs using Havlak's algorithm, which derives from the work of the famous computer scientist Tarjan; for the ideas behind the algorithm, please google it. What follows is the hands-on process (it gets a little long...):
Initial version, havlak1.go:
// Go from multi-language-benchmark/src/havlak/go_pro
// Copyright 2011 Google Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Test Program for the Havlak loop finder.
//
// This program constructs a fairly large control flow
// graph and performs loop recognition. This is the Go
// version.
//
package main

import (
	"flag"
	"fmt"
	"log"
	"os"
	"runtime/pprof"
)

type BasicBlock struct {
	Name     int
	InEdges  []*BasicBlock
	OutEdges []*BasicBlock
}

func NewBasicBlock(name int) *BasicBlock {
	return &BasicBlock{Name: name}
}

func (bb *BasicBlock) Dump() {
	fmt.Printf("BB#%06d:", bb.Name)
	if len(bb.InEdges) > 0 {
		fmt.Printf(" in :")
		for _, iter := range bb.InEdges {
			fmt.Printf(" BB#%06d", iter.Name)
		}
	}
	if len(bb.OutEdges) > 0 {
		fmt.Print(" out:")
		for _, iter := range bb.OutEdges {
			fmt.Printf(" BB#%06d", iter.Name)
		}
	}
	fmt.Printf("\n")
}

func (bb *BasicBlock) NumPred() int {
	return len(bb.InEdges)
}

func (bb *BasicBlock) NumSucc() int {
	return len(bb.OutEdges)
}

func (bb *BasicBlock) AddInEdge(from *BasicBlock) {
	bb.InEdges = append(bb.InEdges, from)
}

func (bb *BasicBlock) AddOutEdge(to *BasicBlock) {
	bb.OutEdges = append(bb.OutEdges, to)
}

//-----------------------------------------------------------

type CFG struct {
	Blocks []*BasicBlock
	Start  *BasicBlock
}

func NewCFG() *CFG {
	return &CFG{}
}

func (cfg *CFG) NumNodes() int {
	return len(cfg.Blocks)
}

func (cfg *CFG) CreateNode(node int) *BasicBlock {
	if node < len(cfg.Blocks) {
		return cfg.Blocks[node]
	}
	if node != len(cfg.Blocks) {
		println("oops", node, len(cfg.Blocks))
		panic("wtf")
	}
	bblock := NewBasicBlock(node)
	cfg.Blocks = append(cfg.Blocks, bblock)
	if len(cfg.Blocks) == 1 {
		cfg.Start = bblock
	}
	return bblock
}

func (cfg *CFG) Dump() {
	for _, n := range cfg.Blocks {
		n.Dump()
	}
}

//-----------------------------------------------------------

type BasicBlockEdge struct {
	Dst *BasicBlock
	Src *BasicBlock
}

func NewBasicBlockEdge(cfg *CFG, from int, to int) *BasicBlockEdge {
	self := new(BasicBlockEdge)
	self.Src = cfg.CreateNode(from)
	self.Dst = cfg.CreateNode(to)
	self.Src.AddOutEdge(self.Dst)
	self.Dst.AddInEdge(self.Src)
	return self
}

//-----------------------------------------------------------

// Basic Blocks and Loops are being classified as regular, irreducible,
// and so on. This enum contains a symbolic name for all these classifications
//
const (
	_             = iota // Go has an interesting iota concept
	bbTop                // uninitialized
	bbNonHeader          // a regular BB
	bbReducible          // reducible loop
	bbSelf               // single BB loop
	bbIrreducible        // irreducible loop
	bbDead               // a dead BB
	bbLast               // sentinel
)

// UnionFindNode is used in the Union/Find algorithm to collapse
// complete loops into a single node. These nodes and the
// corresponding functionality are implemented with this class
//
type UnionFindNode struct {
	parent    *UnionFindNode
	bb        *BasicBlock
	loop      *SimpleLoop
	dfsNumber int
}

// Init explicitly initializes UnionFind nodes.
//
func (u *UnionFindNode) Init(bb *BasicBlock, dfsNumber int) {
	u.parent = u
	u.bb = bb
	u.dfsNumber = dfsNumber
	u.loop = nil
}

// FindSet implements the Find part of the Union/Find Algorithm
//
// Implemented with Path Compression (inner loops are only
// visited and collapsed once, however, deep nests would still
// result in significant traversals).
//
func (u *UnionFindNode) FindSet() *UnionFindNode {
	var nodeList []*UnionFindNode

	node := u
	for ; node != node.parent; node = node.parent {
		if node.parent != node.parent.parent {
			nodeList = append(nodeList, node)
		}
	}

	// Path Compression, all nodes' parents point to the 1st level parent.
	for _, ll := range nodeList {
		ll.parent = node.parent
	}

	return node
}

// Union relies on path compression.
//
func (u *UnionFindNode) Union(B *UnionFindNode) {
	u.parent = B
}

// Constants
//
// Marker for uninitialized nodes.
const unvisited = -1

// Safeguard against pathological algorithm behavior.
const maxNonBackPreds = 32 * 1024

// IsAncestor
//
// As described in the paper, determine whether a node 'w' is a
// "true" ancestor for node 'v'.
//
// Dominance can be tested quickly using a pre-order trick
// for depth-first spanning trees. This is why DFS is the first
// thing we run below.
//
// Go comment: Parameters can be written as w,v int, inlike in C, where
// each parameter needs its own type.
//
func isAncestor(w, v int, last []int) bool {
	return ((w <= v) && (v <= last[w]))
}

// listContainsNode
//
// Check whether a list contains a specific element.
//
func listContainsNode(l []*UnionFindNode, u *UnionFindNode) bool {
	for _, ll := range l {
		if ll == u {
			return true
		}
	}
	return false
}

// DFS - Depth-First-Search and node numbering.
//
func DFS(currentNode *BasicBlock, nodes []*UnionFindNode, number map[*BasicBlock]int, last []int, current int) int {
	nodes[current].Init(currentNode, current)
	number[currentNode] = current

	lastid := current
	for _, target := range currentNode.OutEdges {
		if number[target] == unvisited {
			lastid = DFS(target, nodes, number, last, lastid+1)
		}
	}
	last[number[currentNode]] = lastid
	return lastid
}

// FindLoops
//
// Find loops and build loop forest using Havlak's algorithm, which
// is derived from Tarjan. Variable names and step numbering has
// been chosen to be identical to the nomenclature in Havlak's
// paper (which, in turn, is similar to the one used by Tarjan).
//
func FindLoops(cfgraph *CFG, lsgraph *LSG) {
	if cfgraph.Start == nil {
		return
	}

	size := cfgraph.NumNodes()

	nonBackPreds := make([]map[int]bool, size)
	backPreds := make([][]int, size)

	number := make(map[*BasicBlock]int)
	header := make([]int, size, size)
	types := make([]int, size, size)
	last := make([]int, size, size)
	nodes := make([]*UnionFindNode, size, size)

	for i := 0; i < size; i++ {
		nodes[i] = new(UnionFindNode)
	}

	// Step a:
	//   - initialize all nodes as unvisited.
	//   - depth-first traversal and numbering.
	//   - unreached BB's are marked as dead.
	//
	for i, bb := range cfgraph.Blocks {
		number[bb] = unvisited
		nonBackPreds[i] = make(map[int]bool)
	}

	DFS(cfgraph.Start, nodes, number, last, 0)

	// Step b:
	//   - iterate over all nodes.
	//
	//   A backedge comes from a descendant in the DFS tree, and non-backedges
	//   from non-descendants (following Tarjan).
	//
	//   - check incoming edges 'v' and add them to either
	//     - the list of backedges (backPreds) or
	//     - the list of non-backedges (nonBackPreds)
	//
	for w := 0; w < size; w++ {
		header[w] = 0
		types[w] = bbNonHeader

		nodeW := nodes[w].bb
		if nodeW == nil {
			types[w] = bbDead
			continue // dead BB
		}

		if nodeW.NumPred() > 0 {
			for _, nodeV := range nodeW.InEdges {
				v := number[nodeV]
				if v == unvisited {
					continue // dead node
				}

				if isAncestor(w, v, last) {
					backPreds[w] = append(backPreds[w], v)
				} else {
					nonBackPreds[w][v] = true
				}
			}
		}
	}

	// Start node is root of all other loops.
	header[0] = 0

	// Step c:
	//
	// The outer loop, unchanged from Tarjan. It does nothing except
	// for those nodes which are the destinations of backedges.
	// For a header node w, we chase backward from the sources of the
	// backedges adding nodes to the set P, representing the body of
	// the loop headed by w.
	//
	// By running through the nodes in reverse of the DFST preorder,
	// we ensure that inner loop headers will be processed before the
	// headers for surrounding loops.
	//
	for w := size - 1; w >= 0; w-- {
		// this is 'P' in Havlak's paper
		var nodePool []*UnionFindNode

		nodeW := nodes[w].bb
		if nodeW == nil {
			continue // dead BB
		}

		// Step d:
		for _, v := range backPreds[w] {
			if v != w {
				nodePool = append(nodePool, nodes[v].FindSet())
			} else {
				types[w] = bbSelf
			}
		}

		// Copy nodePool to workList.
		//
		workList := append([]*UnionFindNode(nil), nodePool...)

		if len(nodePool) != 0 {
			types[w] = bbReducible
		}

		// work the list...
		//
		for len(workList) > 0 {
			x := workList[0]
			workList = workList[1:]

			// Step e:
			//
			// Step e represents the main difference from Tarjan's method.
			// Chasing upwards from the sources of a node w's backedges. If
			// there is a node y' that is not a descendant of w, w is marked
			// the header of an irreducible loop, there is another entry
			// into this loop that avoids w.
			//

			// The algorithm has degenerated. Break and
			// return in this case.
			//
			nonBackSize := len(nonBackPreds[x.dfsNumber])
			if nonBackSize > maxNonBackPreds {
				return
			}

			for iter := range nonBackPreds[x.dfsNumber] {
				y := nodes[iter]
				ydash := y.FindSet()

				if !isAncestor(w, ydash.dfsNumber, last) {
					types[w] = bbIrreducible
					nonBackPreds[w][ydash.dfsNumber] = true
				} else {
					if ydash.dfsNumber != w {
						if !listContainsNode(nodePool, ydash) {
							workList = append(workList, ydash)
							nodePool = append(nodePool, ydash)
						}
					}
				}
			}
		}

		// Collapse/Unionize nodes in a SCC to a single node
		// For every SCC found, create a loop descriptor and link it in.
		//
		if (len(nodePool) > 0) || (types[w] == bbSelf) {
			loop := lsgraph.NewLoop()

			loop.SetHeader(nodeW)
			if types[w] != bbIrreducible {
				loop.IsReducible = true
			}

			// At this point, one can set attributes to the loop, such as:
			//
			// the bottom node:
			//    iter = backPreds[w].begin();
			//    loop bottom is: nodes[iter].node);
			//
			// the number of backedges:
			//    backPreds[w].size()
			//
			// whether this loop is reducible:
			//    type[w] != BasicBlockClass.bbIrreducible
			//
			nodes[w].loop = loop

			for _, node := range nodePool {
				// Add nodes to loop descriptor.
				header[node.dfsNumber] = w
				node.Union(nodes[w])

				// Nested loops are not added, but linked together.
				if node.loop != nil {
					node.loop.Parent = loop
				} else {
					loop.AddNode(node.bb)
				}
			}

			lsgraph.AddLoop(loop)
		} // nodePool.size
	} // Step c
}

// External entry point.
func FindHavlakLoops(cfgraph *CFG, lsgraph *LSG) int {
	FindLoops(cfgraph, lsgraph)
	return lsgraph.NumLoops()
}

//======================================================
// Scaffold Code
//======================================================

// Basic representation of loops, a loop has an entry point,
// one or more exit edges, a set of basic blocks, and potentially
// an outer loop - a "parent" loop.
//
// Furthermore, it can have any set of properties, e.g.,
// it can be an irreducible loop, have control flow, be
// a candidate for transformations, and what not.
//
type SimpleLoop struct {
	// No set, use map to bool
	basicBlocks map[*BasicBlock]bool
	Children    map[*SimpleLoop]bool
	Parent      *SimpleLoop
	header      *BasicBlock

	IsRoot       bool
	IsReducible  bool
	Counter      int
	NestingLevel int
	DepthLevel   int
}

func (loop *SimpleLoop) AddNode(bb *BasicBlock) {
	loop.basicBlocks[bb] = true
}

func (loop *SimpleLoop) AddChildLoop(child *SimpleLoop) {
	loop.Children[child] = true
}

func (loop *SimpleLoop) Dump(indent int) {
	for i := 0; i < indent; i++ {
		fmt.Printf(" ")
	}

	// No ? operator ?
	fmt.Printf("loop-%d nest: %d depth %d ",
		loop.Counter, loop.NestingLevel, loop.DepthLevel)
	if !loop.IsReducible {
		fmt.Printf("(Irreducible) ")
	}

	// must have > 0
	if len(loop.Children) > 0 {
		fmt.Printf("Children: ")
		for ll := range loop.Children {
			fmt.Printf("loop-%d", ll.Counter)
		}
	}
	if len(loop.basicBlocks) > 0 {
		fmt.Printf("(")
		for bb := range loop.basicBlocks {
			fmt.Printf("BB#%06d ", bb.Name)
			if loop.header == bb {
				fmt.Printf("*")
			}
		}
		fmt.Printf("\b)")
	}
	fmt.Printf("\n")
}

func (loop *SimpleLoop) SetParent(parent *SimpleLoop) {
	loop.Parent = parent
	loop.Parent.AddChildLoop(loop)
}

func (loop *SimpleLoop) SetHeader(bb *BasicBlock) {
	loop.AddNode(bb)
	loop.header = bb
}

//------------------------------------
// Helper (No templates or such)
//
func max(x, y int) int {
	if x > y {
		return x
	}
	return y
}

// LoopStructureGraph
//
// Maintain loop structure for a given CFG.
//
// Two values are maintained for this loop graph, depth, and nesting level.
// For example:
//
// loop      nesting level    depth
//----------------------------------------
// loop-0    2                0
// loop-1    1                1
// loop-3    1                1
// loop-2    0                2
//
var loopCounter = 0

type LSG struct {
	root  *SimpleLoop
	loops []*SimpleLoop
}

func NewLSG() *LSG {
	lsg := new(LSG)
	lsg.root = lsg.NewLoop()
	lsg.root.NestingLevel = 0

	return lsg
}

func (lsg *LSG) NewLoop() *SimpleLoop {
	loop := new(SimpleLoop)
	loop.basicBlocks = make(map[*BasicBlock]bool)
	loop.Children = make(map[*SimpleLoop]bool)
	loop.Parent = nil
	loop.header = nil

	loop.Counter = loopCounter
	loopCounter++

	return loop
}

func (lsg *LSG) AddLoop(loop *SimpleLoop) {
	lsg.loops = append(lsg.loops, loop)
}

func (lsg *LSG) Dump() {
	lsg.dump(lsg.root, 0)
}

func (lsg *LSG) dump(loop *SimpleLoop, indent int) {
	loop.Dump(indent)

	for ll := range loop.Children {
		lsg.dump(ll, indent+1)
	}
}

func (lsg *LSG) CalculateNestingLevel() {
	for _, sl := range lsg.loops {
		if sl.IsRoot {
			continue
		}
		if sl.Parent == nil {
			sl.SetParent(lsg.root)
		}
	}
	lsg.calculateNestingLevel(lsg.root, 0)
}

func (lsg *LSG) calculateNestingLevel(loop *SimpleLoop, depth int) {
	loop.DepthLevel = depth
	for ll := range loop.Children {
		lsg.calculateNestingLevel(ll, depth+1)

		ll.NestingLevel = max(loop.NestingLevel, ll.NestingLevel+1)
	}
}

func (lsg *LSG) NumLoops() int {
	return len(lsg.loops)
}

func (lsg *LSG) Root() *SimpleLoop {
	return lsg.root
}

//======================================================
// Testing Code
//======================================================

func buildDiamond(cfgraph *CFG, start int) int {
	bb0 := start
	NewBasicBlockEdge(cfgraph, bb0, bb0+1)
	NewBasicBlockEdge(cfgraph, bb0, bb0+2)
	NewBasicBlockEdge(cfgraph, bb0+1, bb0+3)
	NewBasicBlockEdge(cfgraph, bb0+2, bb0+3)
	return bb0 + 3
}

func buildConnect(cfgraph *CFG, start int, end int) {
	NewBasicBlockEdge(cfgraph, start, end)
}

func buildStraight(cfgraph *CFG, start int, n int) int {
	for i := 0; i < n; i++ {
		buildConnect(cfgraph, start+i, start+i+1)
	}
	return start + n
}

func buildBaseLoop(cfgraph *CFG, from int) int {
	header := buildStraight(cfgraph, from, 1)
	diamond1 := buildDiamond(cfgraph, header)
	d11 := buildStraight(cfgraph, diamond1, 1)
	diamond2 := buildDiamond(cfgraph, d11)
	footer := buildStraight(cfgraph, diamond2, 1)
	buildConnect(cfgraph, diamond2, d11)
	buildConnect(cfgraph, diamond1, header)

	buildConnect(cfgraph, footer, from)
	footer = buildStraight(cfgraph, footer, 1)
	return footer
}

var cpuprofile = flag.String("cpuprofile", "", "write cpu profile to this file")

func main() {
	flag.Parse()
	if *cpuprofile != "" {
		f, err := os.Create(*cpuprofile)
		if err != nil {
			log.Fatal(err)
		}
		pprof.StartCPUProfile(f)
		defer pprof.StopCPUProfile()
	}

	lsgraph := NewLSG()
	cfgraph := NewCFG()

	cfgraph.CreateNode(0) // top
	cfgraph.CreateNode(1) // bottom
	NewBasicBlockEdge(cfgraph, 0, 2)

	for dummyloop := 0; dummyloop < 15000; dummyloop++ {
		FindHavlakLoops(cfgraph, NewLSG())
	}

	n := 2

	for parlooptrees := 0; parlooptrees < 10; parlooptrees++ {
		cfgraph.CreateNode(n + 1)
		buildConnect(cfgraph, 2, n+1)
		n = n + 1

		for i := 0; i < 100; i++ {
			top := n
			n = buildStraight(cfgraph, n, 1)
			for j := 0; j < 25; j++ {
				n = buildBaseLoop(cfgraph, n)
			}
			bottom := buildStraight(cfgraph, n, 1)
			buildConnect(cfgraph, n, top)
			n = bottom
		}
		buildConnect(cfgraph, n, 1)
	}

	FindHavlakLoops(cfgraph, lsgraph)
	for i := 0; i < 50; i++ {
		FindHavlakLoops(cfgraph, NewLSG())
	}

	fmt.Printf("# of loops: %d (including 1 artificial root node)\n", lsgraph.NumLoops())
	lsgraph.CalculateNestingLevel()
}
We use the time command on macOS to print the program's running time (kernel time, user time, and total wall-clock time):
Build and then run the program:
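The original article shows the build and run output as screenshots; the commands themselves look roughly like this (the binary and profile file names are my own choice):

go build -o havlak1 havlak1.go
time ./havlak1 -cpuprofile=havlak1.prof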
User time is 23.07s, kernel time 0.4s, and total wall-clock time 13.7s (about 170% CPU, i.e. two cores were used). Since the program already starts a pprof CPU profile itself, we can jump straight to the top command:
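Again the measurements appear as screenshots in the original; the corresponding pprof session would look like this (file names as above):

go tool pprof havlak1 havlak1.prof
(pprof) top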
We can see that pprof collected data for 12.99s, while the total sampled time is 19.49s (again because two cores were in use).
A note here: both CPU profiling and memory profiling are sampling-based. For CPU time, the call stack is recorded at intervals (roughly every 10ms); the analysis then counts, across all samples, how often each function appears, both as the function currently executing and as a caller on the stack (which means it had called some other function). For memory, one allocation path is recorded per 512KB allocated, on average.
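For reference, both sampling knobs live in the runtime package; a minimal sketch with the documented defaults (the helper name is mine):

import "runtime"

// tuneProfiling shows where the two sampling rates come from.
func tuneProfiling() {
	// Heap profiling: on average one allocation is sampled per 512KB allocated.
	// Setting this to 1 records every allocation, at much higher overhead.
	runtime.MemProfileRate = 512 * 1024

	// CPU profiling has no variable to set here: pprof.StartCPUProfile samples
	// the call stacks at 100Hz, i.e. roughly one snapshot every 10ms.
}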
The most expensive entry is mapaccess1_fast64, the runtime's lookup routine used when reading a map with 64-bit keys. If the program had been compiled without disabling inlining, the result would look a little different: the FindHavlakLoops function would appear, marked as inlined. Since FindHavlakLoops does nothing but call FindLoops, the compiler expands the call, effectively replacing FindHavlakLoops with FindLoops. Alternatively, you can compile with inlining enabled and set pprof's noinlines option to true (it defaults to false), i.e. noinlines=true.
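Both knobs look roughly like this in practice (the -gcflags form disables inlining at compile time; noinlines is set inside the interactive pprof session, as described above):

# build without inlining, so wrapper functions keep their own frames
go build -gcflags='-l' -o havlak1 havlak1.go

# or keep inlining and change how pprof attributes inlined frames
go tool pprof havlak1 havlak1.prof
(pprof) noinlines=true
(pprof) top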
The biggest entry here is not business code but a runtime function. There is nothing to optimize inside a runtime function itself (if it is this hot, we must be using it badly), so the next step is to see where it is being called from:
web mapaccess1_fast64
The heaviest callers are DFS and FindLoops. So let's look at how those two functions use maps, starting with DFS:
In DFS, the lines that are both expensive and map-related are lines 242, 246, and 250. The fix is to replace the map with a slice: where a slice can do the job, a map is unnecessary. At first glance that may feel slightly off, but it follows from a property of this particular program: a BasicBlock's Name field is exactly its index, so looking up a BasicBlock needs no map at all; direct indexing is obviously the lowest possible time complexity. (On map versus slice complexity: a map lookup looks like O(1), but that ignores the cost of hashing the key and resolving collisions, whereas slice indexing genuinely is O(1).) Changing this map to a slice gives version H2; build it and check the timing again:
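The change itself is shown as a screenshot in the original; sketched minimally, the idea is that number, previously a map[*BasicBlock]int, becomes a []int indexed by the block's Name (this is my reconstruction against the types in the listing above, not the author's exact diff):

// DFS with the lookup map replaced by a slice: BasicBlock.Name is already the
// block's index, so number can simply be indexed by Name.
func DFS(currentNode *BasicBlock, nodes []*UnionFindNode, number []int, last []int, current int) int {
	nodes[current].Init(currentNode, current)
	number[currentNode.Name] = current

	lastid := current
	for _, target := range currentNode.OutEdges {
		if number[target.Name] == unvisited {
			lastid = DFS(target, nodes, number, last, lastid+1)
		}
	}
	last[number[currentNode.Name]] = lastid
	return lastid
}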
The running time drops to 8s. Check the CPU profile again:
The top entry is now scanobject rather than map access. Seeing that function, along with mallocgc below it, we should recognize that both are memory related: the former scans heap objects as part of the garbage collector's tri-color marking, and the latter manages allocation and reclamation directly (note it handles both, malloc && gc). So the program is now spending a lot of time on memory management, around 8% by this profile. We need to figure out what is generating all these allocations and whether that is reasonable. Analyzing memory is pprof's memprofile feature. Add the usual memory-profiling code (everyone knows how, so I won't paste it; it is all over the web), rebuild as version H3, run it, and produce the memory profile:
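The memory-profiling code the article skips is the standard -memprofile flag pattern; a minimal sketch that plugs into the existing program (the flag name and helper are illustrative), called once before main returns:

var memprofile = flag.String("memprofile", "", "write memory profile to this file")

func writeMemProfile() {
	if *memprofile == "" {
		return
	}
	f, err := os.Create(*memprofile)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	// Dump the allocation profile gathered so far.
	if err := pprof.WriteHeapProfile(f); err != nil {
		log.Fatal(err)
	}
}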
Now look at the allocation profile. Inlining is not disabled here, because at real run time inlinable functions will be expanded and replaced anyway:
Node creation ranks first and FindLoops second. Creating a BasicBlock is about as simple as it gets, and the struct is a fundamental object of the program; changing it would probably mean changing the algorithm as well, which is clearly inappropriate here: it could introduce bugs and likely would not pay off. So we follow the second entry, FindLoops, which is also the other business function we saw calling the map-access builtin earlier. The optimization is the same as before: replace the map usage with slices. One extra detail: with slices, appending needs a duplicate check, so a small appendUnique helper is added. Rebuild as version H4 and check the timing:
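The helper itself is not shown in the article; what it has to do is a linear de-duplication check before appending, roughly:

// appendUnique appends x only if it is not already present, preserving the
// set semantics that the map used to provide.
func appendUnique(a []int, x int) []int {
	for _, y := range a {
		if y == x {
			return a
		}
	}
	return append(a, x)
}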
The running time drops to 5.6s. Check the allocation profile again to confirm the effect:
FindLoops is no longer near the top; its memory consumption has come down. Now look at the CPU profile at this point:
FindLoops is now number one and scanobject has fallen to second. That is exactly right: we want the CPU to spend more of its time running business code, putting the time where it is actually needed.
Now we can move on to the next round of optimization (this is what performance tuning is: hunting down the time sinks again and again, squeezing out unnecessary CPU time, and wringing out the performance). Let's look at where FindLoops is now spending its time, and whether it is justified:
Judging by the per-line costs, nothing looks seriously wrong; no line is outrageously expensive (the 360ms line is simply the loop). So the coding itself is fine, and following the big-to-small approach described at the start, we need to move up a level and see whether the logic can be changed to avoid unnecessary work (that is: if every individual computation is already fine, can we just compute less?). FindLoops spends a noticeable amount of time at its entry initializing a series of temporary variables, about 180ms in total, which means every call to FindLoops pays roughly 180ms of setup first. This repeated creation and initialization of temporaries can be reduced with a memory cache; the standard solution for this pattern is an object pool (in a Go web service under high concurrency, for instance, every HTTP request pays for object creation, allocation, and deserialization, and sync.Pool is the usual way to cut that repeated work and improve performance). In the same spirit, a cache object is added here:
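The cache appears as a screenshot in the original; a minimal sketch of the idea is a package-level struct holding the scratch slices FindLoops needs, so they survive between calls (the field names are mine and follow FindLoops' local variables):

// cache keeps FindLoops' temporary buffers alive across calls so they are
// allocated once and then reused instead of re-created on every invocation.
var cache struct {
	size         int
	nonBackPreds [][]int
	backPreds    [][]int
	number       []int
	header       []int
	types        []int
	last         []int
	nodes        []*UnionFindNode
}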
It replaces the original allocate-and-initialize sequence: if the existing cache is too small, new buffers are allocated; otherwise the cached ones are truncated and reused, reducing allocations:
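Sketched against the hypothetical cache struct above (the helper name setupBuffers is mine): grow the cache when the current graph is bigger than anything seen so far, otherwise re-slice and reset the cached buffers.

func setupBuffers(size int) (nonBackPreds, backPreds [][]int, number, header, types, last []int, nodes []*UnionFindNode) {
	if cache.size < size {
		// Cache too small: allocate fresh buffers and remember them.
		cache.size = size
		cache.nonBackPreds = make([][]int, size)
		cache.backPreds = make([][]int, size)
		cache.number = make([]int, size)
		cache.header = make([]int, size)
		cache.types = make([]int, size)
		cache.last = make([]int, size)
		cache.nodes = make([]*UnionFindNode, size)
		for i := range cache.nodes {
			cache.nodes[i] = new(UnionFindNode)
		}
	}
	// Truncate the cached buffers to the current size and reset their contents.
	nonBackPreds = cache.nonBackPreds[:size]
	for i := range nonBackPreds {
		nonBackPreds[i] = nonBackPreds[i][:0]
	}
	backPreds = cache.backPreds[:size]
	for i := range backPreds {
		backPreds[i] = backPreds[i][:0]
	}
	number = cache.number[:size]
	header = cache.header[:size]
	types = cache.types[:size]
	last = cache.last[:size]
	nodes = cache.nodes[:size]
	return
}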
With that adjustment done, build the new version H5 and look at the timing:
The program now runs in 4.1s, more than three times as fast as the original 13.7s. Here is the current CPU profile:
Compared with the previous profile, the top two entries are now both business functions, which means business code takes an even larger share of the time and the load from unrelated runtime functions has dropped further. Note that a cache built on a plain global variable like this is not concurrency-safe; it is fine here only because this program's logic is not concurrent in the first place.
That completes the hands-on optimization. To summarize the process and the main commands used:
1. Use top5 to inspect CPU time: map operations take the largest share;
2. Use the web command to see who calls the map-access function the most: DFS and FindLoops;
3. Use the list command to inspect DFS line by line, find the map-related lines, and settle on the fix: replace the map with a slice (the command forms are sketched right after this list);
4. Re-run and check the CPU profile with top again: the biggest cost is now the object-scanning function scanobject;
5. Add memory-profiling code, run, and look at where the allocations come from: FindLoops allocates a lot;
6. Use list on FindLoops to see its per-line allocations: creating the maps is the big cost;
7. Replace that map with a slice as well, re-run, and look at the allocation profile: FindLoops is no longer near the top, and in the CPU profile scanobject has dropped too, so the goal is achieved;
8. Start another round and check the CPU ranking again: FindLoops now sits at the top;
9. Use list on FindLoops again, line by line: nothing obviously wrong, so switch from hunting coding problems to hunting logic problems and reduce the amount of work done, here by adding a cache;
10. With the cache in place performance improves noticeably, so the optimization was right. Then it is on to the next round, and the next... until we have squeezed out all the performance we can!
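For reference, the pprof commands behind these steps look roughly like this in an interactive session (binary and profile names are illustrative):

go tool pprof havlak1 havlak1.prof      # CPU profile
(pprof) top5                            # heaviest functions
(pprof) web mapaccess1_fast64           # graph of callers of a hot function
(pprof) list DFS                        # per-line cost inside one function

go tool pprof havlak3 havlak3.mprof     # memory profile
(pprof) top5
(pprof) list FindLoops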
That is the overall process and reasoning for this optimization exercise. The method stays the same throughout: keep using pprof to find the top functions by CPU time and memory usage, then improve data structures, remove unnecessary computation, and lighten the allocation and garbage-collection burden. This applies to any program and is a methodology worth carrying forward.
Two Takeaways
1. The precondition for optimizing a program is understanding it deeply enough (put another way: let's not optimize bugs into the code...). The eventual H6 version, which roughly doubles performance yet again, rests entirely on its author's complete understanding of the program; that is what allowed him to restructure the loops, reuse objects, adjust the call logic, and so on. Thinking one more level up from there leads to program logic, system architecture, and finally the overall business architecture, which brings us back to the big-to-small approach described at the beginning.
2. A garbage-collected language, compared with one without GC, actually demands more skill in memory management. Go buys development efficiency by handing garbage collection to the runtime, but don't forget that it still burns our own CPU time (the wool still comes off the sheep's back...). So for every data structure we use and every computation we perform, we should understand the memory cost behind it and manage memory almost subconsciously.
That is the hands-on process, plus my summary and reflections, for optimizing a Go program with pprof. I hope it is a useful reference. Thanks!