Object pooling is a relatively common practice in games. By reusing your GameObjects instead of destroying and recreating them you can save precious CPU cycles. It is easy to find a lot of free scripts and tutorials on the subject – even Unity has provided one in a Live Training session. Their presentation, while a great introduction, was not what I would consider production ready. In this post I’ll share my thoughts on their implementation as well as how I would improve upon it.
Unity Live Training
The Unity Live Training on Object Pooling was provided by Mike Geig here – Unity Live Training demo. I don’t personally know Mike, but I assume that anyone putting together content for Unity’s website must meet some minimum professional bar. I looked him up and saw that he is also an author and college instructor. Good enough for me! If you haven’t watched the video, feel free to do so – it provides a great introduction to the topic. The video is 49 minutes long and there isn’t any obvious place to copy his code, so if you would rather skip it and just check out the code, I have provided a general copy of it below for reference’s sake.
His demo can be boiled down to three basic scripts:
- A script which recycles a bullet after a period of time.
- A script which rapidly fires bullets (which are reused from the object pool).
- A script which manages the object pool itself.
using UnityEngine;
using System.Collections;

public class BulletDestroyScript : MonoBehaviour
{
    void OnEnable ()
    {
        Invoke("Destroy", 2f);
    }

    void Destroy ()
    {
        gameObject.SetActive(false);
    }

    void OnDisable ()
    {
        CancelInvoke("Destroy");
    }
}
using UnityEngine;
using System.Collections;

public class BulletFireScript : MonoBehaviour
{
    public float fireTime = 0.05f;

    void Start ()
    {
        InvokeRepeating("Fire", fireTime, fireTime);
    }

    void Fire ()
    {
        GameObject obj = ObjectPoolerScript.current.GetPooledObject();
        if (obj == null)
            return;

        // Position the bullet here
        obj.SetActive(true);
    }
}
using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class ObjectPoolerScript : MonoBehaviour
{
    public static ObjectPoolerScript current;
    public GameObject pooledObject;
    public int pooledAmount = 20;
    public bool willGrow = true;

    List<GameObject> pooledObjects;

    void Awake ()
    {
        current = this;
    }

    void Start ()
    {
        pooledObjects = new List<GameObject>();
        for (int i = 0; i < pooledAmount; ++i)
        {
            GameObject obj = (GameObject)Instantiate(pooledObject);
            obj.SetActive(false);
            pooledObjects.Add(obj);
        }
    }

    public GameObject GetPooledObject ()
    {
        for (int i = 0; i < pooledObjects.Count; ++i)
        {
            if (!pooledObjects[i].activeInHierarchy)
                return pooledObjects[i];
        }

        if (willGrow)
        {
            GameObject obj = (GameObject)Instantiate(pooledObject);
            pooledObjects.Add(obj);
            return obj;
        }

        return null;
    }
}
The bullet firing and recycling scripts are not really that important here – they are just an example of how to use the pooling system. The ObjectPoolerScript holds the important stuff, and there are several areas which I would want to highlight for improvement.
The first area for improvement is reusability. The script was separated into its own component in order to be reusable, but in its current form it is really only reusable between different projects. I would argue it is not reusable within the same project – and that would be far more important. Here’s why: the script only holds a reference to a single prefab. If you wanted to pool different types of bullets, powerups, enemies, etc., you would need a new copy of the script for each one. Note that you couldn’t simply add this component multiple times and assign different prefabs, because the class loosely implements the singleton design pattern. Whichever instance was the last to Awake would claim the static reference called “current”; the other instances would have no good way to be found or differentiated from one another.
I like that this system can prepopulate the object pool. I also like that it can grow where necessary. A minor feature request we could add is to say HOW MUCH it is allowed to grow. Specifying a maximum count could be helpful in some scenarios.
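As a sketch of what that cap might look like, the grow branch of Mike’s GetPooledObject could be gated on a ceiling. Note that the maxAmount field is my own hypothetical addition, not part of the original script:

```csharp
// Hypothetical addition to ObjectPoolerScript - maxAmount is not in the original
public int maxAmount = 50;

public GameObject GetPooledObject ()
{
    for (int i = 0; i < pooledObjects.Count; ++i)
    {
        if (!pooledObjects[i].activeInHierarchy)
            return pooledObjects[i];
    }

    // Only grow while we are under the configured ceiling
    if (willGrow && pooledObjects.Count < maxAmount)
    {
        GameObject obj = (GameObject)Instantiate(pooledObject);
        pooledObjects.Add(obj);
        return obj;
    }

    return null;
}
```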
The next area for improvement is in regard to how an object is considered “pooled” or “not pooled” based on whether or not its GameObject is active in the scene hierarchy. This is problematic for multiple reasons such as:
- The object is not made active before providing it to a consumer, and there is no way to know when a consumer will activate its object. This means that the pool might erroneously provide the same object to multiple consumers.
- Because the pool is checking activeInHierarchy, any parent object which is disabled will cause the pooled object to become marked as reusable – which may have been an unintended and unexpected consequence.
- The entire ancestral hierarchy has to be checked to determine whether or not an object is available for use – much slower than having a boolean somewhere.
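One possible fix for all three issues, sketched here with hypothetical names (PooledMarker and FlagBasedPool are illustrative only – my own Poolable component later in this post takes the same basic approach), is to track availability with an explicit flag instead of the active state:

```csharp
using UnityEngine;
using System.Collections.Generic;

// Sketch only: PooledMarker is a hypothetical component name
public class PooledMarker : MonoBehaviour
{
    public bool isPooled = true;
}

public class FlagBasedPool : MonoBehaviour
{
    List<PooledMarker> markers = new List<PooledMarker>();

    // Claiming the object by flag *before* returning it means the pool can
    // never hand the same instance to two consumers, no matter when they
    // activate it, and a disabled parent can't make it look reusable.
    public GameObject GetPooledObject ()
    {
        for (int i = 0; i < markers.Count; ++i)
        {
            if (markers[i].isPooled)
            {
                markers[i].isPooled = false; // claim immediately
                return markers[i].gameObject;
            }
        }
        return null;
    }
}
```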
This implementation never checks the validity of its own pooled items. For example, suppose a consumer takes an object from the pool, parents it to another object, and that parent object is later destroyed – you won’t be able to save the pooled object from destruction. The pool manager will then throw exceptions the next time it reaches the index of the destroyed object. In a system where objects are explicitly removed from and returned to the pool, the pool could simply refuse to accept objects which are null, adding a degree of safety.
My own pooling implementation was based on a generic Queue, which naturally adds and removes objects from its own collection. Mike personally called this approach out as a bad idea. He said that removing and inserting objects in a collection was “really costly” and even replied to a direct question about using two different lists so that you wouldn’t need to search for an acceptable object to return. His reply was basically, “Searching is so much more efficient than managing two different lists”.
My initial thought was, “Huh?” The search he had implemented would run somewhere between O(1) and O(n) depending on how quickly it found a valid reusable object – let’s just say on average it would be O(n/2). A Queue’s Enqueue and Dequeue methods are both O(1) operations regardless of the size of the collection. Even if you were using two Lists for management, Add is O(1) as long as the capacity doesn’t need to change, and RemoveAt from the end of a list is O(1) as well.
O(1) and O(n) are examples of Big O notation which is just a nerdy way of indicating how long an algorithm will take to execute with regard to its input. O(1) means constant time – the only way to be faster is to not do anything at all. O(n) means linear time – the larger the system, the slower the process.
My next thought was that even though adding to and removing from a queue maintains its speed regardless of capacity, that doesn’t necessarily mean it is fast. Perhaps Mike knew something I did not – I supposed it was possible that a small search-based pool like his example could actually be faster than a pool which removes objects from a collection and adds them back. So I decided to test it for myself. I created a new project and added a script which created two different types of pools: one maintained a fixed collection of objects and searched it for a valid match as needed; the other maintained a queue which could add and remove poolable objects with no need for searching. The big question is whether searching or modifying a collection ends up being more costly.
In Mike’s demo, a single spaceship needed about 41 bullets in a pool in order not to have any gaps in its ability to fire. Since Mike also suggested that the pool could be reused for multiple characters (enemies, etc.), it doesn’t seem unreasonable to require a pool of 100 bullets. Therefore, the test I created pools 100 bullets and loops 1,000 times, where each loop gets and “uses” all of the pooled objects and then returns them to the pool. I used a System.Diagnostics.Stopwatch to measure the time required for each test and logged the result to the console.
Both pools executed very quickly; however, the results were in my favor – searching a list was about four times slower than the Queue-based pool which actually added and removed objects from its collection.
- Completed SearchPool in 128 ms
- Completed QueuePool in 31 ms
Anyone wishing to test this for themselves or to verify that my test was fair and unbiased can see the code below. Otherwise, based on these test results, I see no reason to disregard my own implementation based on his warnings.
using UnityEngine;
using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;

public class PoolMe
{
    public bool isPooled;
}

public abstract class BasePool
{
    public abstract PoolMe GetPooledObject ();
    public abstract void ReturnPooledObject (PoolMe obj);
}

public class SearchPool : BasePool
{
    List<PoolMe> pool;

    public SearchPool (int count)
    {
        pool = new List<PoolMe>(count);
        for (int i = 0; i < count; ++i)
        {
            PoolMe p = new PoolMe();
            p.isPooled = true;
            pool.Add(p);
        }
    }

    public override PoolMe GetPooledObject ()
    {
        for (int i = 0; i < pool.Count; ++i)
        {
            if (pool[i].isPooled)
            {
                pool[i].isPooled = false;
                return pool[i];
            }
        }
        return null;
    }

    public override void ReturnPooledObject (PoolMe obj)
    {
        obj.isPooled = true;
    }
}

public class QueuePool : BasePool
{
    Queue<PoolMe> pool;

    public QueuePool (int count)
    {
        pool = new Queue<PoolMe>(count);
        for (int i = 0; i < count; ++i)
            ReturnPooledObject(new PoolMe());
    }

    public override PoolMe GetPooledObject ()
    {
        if (pool.Count > 0)
        {
            PoolMe retValue = pool.Dequeue();
            retValue.isPooled = false;
            return retValue;
        }
        return null;
    }

    public override void ReturnPooledObject (PoolMe obj)
    {
        obj.isPooled = true;
        pool.Enqueue(obj);
    }
}

public class PoolingComparisonDemo : MonoBehaviour
{
    const int objCount = 100;
    const int testCount = 1000;

    IEnumerator Start ()
    {
        TestPool(new SearchPool(objCount));
        yield return new WaitForSeconds(1);
        TestPool(new QueuePool(objCount));
    }

    void TestPool (BasePool pool)
    {
        List<PoolMe> activeObjects = new List<PoolMe>(objCount);
        Stopwatch watch = new Stopwatch();
        watch.Start();

        // Perform a repeating test of getting pooled objects and putting them back
        for (int i = 0; i < testCount; ++i)
        {
            // Get and "use" all items in the pool
            for (int j = 0; j < objCount; ++j)
                activeObjects.Add(pool.GetPooledObject());

            // Put all items back in the pool
            for (int j = objCount - 1; j >= 0; --j)
            {
                pool.ReturnPooledObject(activeObjects[j]);
                activeObjects.RemoveAt(j);
            }
        }
        watch.Stop();

        // Note: ElapsedMilliseconds reports the total elapsed time, whereas
        // Elapsed.Milliseconds would only report the 0-999 ms component
        UnityEngine.Debug.Log(string.Format("Completed {0} in {1} ms", pool.GetType().Name, watch.ElapsedMilliseconds));
    }
}
My Implementation
Instead of deciding whether a particular object is pooled based on whether it is active, I decided to add another component with a bool to indicate it for me. This allows for more flexibility, as well as greater safety, compared with a system that reuses the GameObject’s active flag as a pooling indicator.
using UnityEngine;
using System.Collections;

public class Poolable : MonoBehaviour
{
    public string key;
    public bool isPooled;
}
You may also have noticed that my poolable objects contain a key. That is because I wanted a system which could be reused for multiple different objects without needing to create new pooling managers for each one.
My controller, put simply, has a Dictionary which maps a string key to a PoolData class. PoolData holds information like the prefab used to instantiate new objects, the maximum number of objects to keep in memory, and the queue which stores the reusable objects. To use it, you first call AddEntry, specifying which key maps to which prefab, how many (if any) objects to precreate, and the maximum number of objects to keep in memory. In an ideal scenario you will have a good idea of how many objects an average game needs. Since the initial population count and the maximum count are independent, you have complete control over whether the pool can grow and, if so, by how much.
The methods I exposed for use are static – this means you don’t need an instance reference to use the pooling manager, only the class itself. Static methods and properties can be marginally faster than instance ones, but you lose some flexibility, such as the ability to inherit and override functionality. Feel free to pick the pattern which best suits your needs.
Even though I used static methods, I still chose to create a private singleton instance. I use this GameObject for two main reasons which you may or may not care about:
- Organization – by parenting pooled objects to the pool manager, I can collapse their visibility in the Editor’s Hierarchy window. Great during development!
- Preservation – my pool manager survives scene changes, and any pooled item parented under it survives as well. If you are reusing objects in more than one scene, or changing back and forth between scenes repeatedly, this keeps subsequent load times down. If you don’t want this feature, just have any script which adds an entry also remove that entry when it is destroyed.
using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class PoolData
{
    public GameObject prefab;
    public int maxCount;
    public Queue<Poolable> pool;
}

public class GameObjectPoolController : MonoBehaviour
{
    #region Fields / Properties
    static GameObjectPoolController Instance
    {
        get
        {
            if (instance == null)
                CreateSharedInstance();
            return instance;
        }
    }
    static GameObjectPoolController instance;

    static Dictionary<string, PoolData> pools = new Dictionary<string, PoolData>();
    #endregion

    #region MonoBehaviour
    void Awake ()
    {
        if (instance != null && instance != this)
            Destroy(this);
        else
            instance = this;
    }
    #endregion

    #region Public
    public static void SetMaxCount (string key, int maxCount)
    {
        if (!pools.ContainsKey(key))
            return;
        PoolData data = pools[key];
        data.maxCount = maxCount;
    }

    public static bool AddEntry (string key, GameObject prefab, int prepopulate, int maxCount)
    {
        if (pools.ContainsKey(key))
            return false;

        PoolData data = new PoolData();
        data.prefab = prefab;
        data.maxCount = maxCount;
        data.pool = new Queue<Poolable>(prepopulate);
        pools.Add(key, data);

        for (int i = 0; i < prepopulate; ++i)
            Enqueue(CreateInstance(key, prefab));

        return true;
    }

    public static void ClearEntry (string key)
    {
        if (!pools.ContainsKey(key))
            return;

        PoolData data = pools[key];
        while (data.pool.Count > 0)
        {
            Poolable obj = data.pool.Dequeue();
            GameObject.Destroy(obj.gameObject);
        }
        pools.Remove(key);
    }

    public static void Enqueue (Poolable sender)
    {
        if (sender == null || sender.isPooled || !pools.ContainsKey(sender.key))
            return;

        PoolData data = pools[sender.key];
        if (data.pool.Count >= data.maxCount)
        {
            GameObject.Destroy(sender.gameObject);
            return;
        }

        data.pool.Enqueue(sender);
        sender.isPooled = true;
        sender.transform.SetParent(Instance.transform);
        sender.gameObject.SetActive(false);
    }

    public static Poolable Dequeue (string key)
    {
        if (!pools.ContainsKey(key))
            return null;

        PoolData data = pools[key];
        if (data.pool.Count == 0)
            return CreateInstance(key, data.prefab);

        Poolable obj = data.pool.Dequeue();
        obj.isPooled = false;
        return obj;
    }
    #endregion

    #region Private
    static void CreateSharedInstance ()
    {
        GameObject obj = new GameObject("GameObject Pool Controller");
        DontDestroyOnLoad(obj);
        instance = obj.AddComponent<GameObjectPoolController>();
    }

    static Poolable CreateInstance (string key, GameObject prefab)
    {
        GameObject instance = Instantiate(prefab) as GameObject;
        Poolable p = instance.AddComponent<Poolable>();
        p.key = key;
        return p;
    }
    #endregion
}
Demo
I created a small demo to test out my pool manager and verify that everything worked as I expected. I created two scenes (make sure to add them to the build settings) where each one had my Demo script attached to an object (the scene camera). I changed the background on one scene to help visually differentiate scene changes. I also made use of OnGUI so there was no additional setup required outside of this one script. For the script’s prefab reference I simply created a Sphere.
The Demo displays four buttons on screen. The first two let you swap between scenes to verify that the pool manager and its pooled items survive. The next two let you Enqueue or Dequeue an object from the pool. Note that if you only Dequeue and Enqueue within the original capacity of the pool, no new items will ever need to be created. If you dequeue more than the initial population, new objects will be created to fulfill the request, but they will only be preserved in the pool up to the specified maximum. For example, if you specify an initial population of 10 and a max of 15, and then dequeue 20 items, 10 additional objects will be created; when they are enqueued, only five of them will be kept in the pool and the other five will be destroyed.
As a quick note, I don’t use OnGUI in production projects – I would use Unity’s newer UI system instead. However, for the sake of a quick demo, OnGUI is very convenient because all of the setup fits in a single script, whereas the newer UI would require a lot of setup with a Canvas, Panels, Buttons, linking events to scripts through the editor interface, etc.
using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class Demo : MonoBehaviour
{
    const string PoolKey = "Demo.Prefab";

    [SerializeField] GameObject prefab;
    List<Poolable> instances = new List<Poolable>();

    void Start ()
    {
        if (GameObjectPoolController.AddEntry(PoolKey, prefab, 10, 15))
            Debug.Log("Pre-populating pool");
        else
            Debug.Log("Pool already configured");
    }

    void OnGUI ()
    {
        if (GUI.Button(new Rect(10, 10, 100, 30), "Scene 1"))
            ChangeLevel(0);

        if (GUI.Button(new Rect(10, 50, 100, 30), "Scene 2"))
            ChangeLevel(1);

        if (GUI.Button(new Rect(10, 90, 100, 30), "Dequeue"))
        {
            Poolable obj = GameObjectPoolController.Dequeue(PoolKey);
            float x = UnityEngine.Random.Range(-10, 10);
            float y = UnityEngine.Random.Range(0, 5);
            float z = UnityEngine.Random.Range(0, 10);
            obj.transform.localPosition = new Vector3(x, y, z);
            obj.gameObject.SetActive(true);
            instances.Add(obj);
        }

        if (GUI.Button(new Rect(10, 130, 100, 30), "Enqueue"))
        {
            if (instances.Count > 0)
            {
                Poolable obj = instances[0];
                instances.RemoveAt(0);
                GameObjectPoolController.Enqueue(obj);
            }
        }
    }

    void ChangeLevel (int level)
    {
        ReleaseInstances();
        Application.LoadLevel(level);
    }

    void ReleaseInstances ()
    {
        for (int i = instances.Count - 1; i >= 0; --i)
            GameObjectPoolController.Enqueue(instances[i]);
        instances.Clear();
    }
}
Summary
In this lesson we explored the topic of object pooling and how to write a custom pooling manager. I shared my thoughts on the object pooling demo provided in Unity’s Live Training session and pointed out areas where I felt improvement was needed. I challenged the claim that a search-based pool is more efficient than adding and removing objects from a collection, and provided both the test and its results for review. Finally, I provided my own implementation, which should be more flexible, safe, reusable, and efficient.
I hope you liked this post – I personally enjoy “recreating the wheel” like this because it can be a great educational experience. If you have any questions or comments I would love to hear them!
I think Poolable would be better as an interface. When I want to pool a class, I can just make it implement the interface. Otherwise, if a class already inherits from another class and I want it to be poolable, I must change its base class.
You’re thinking in a different architectural mindset which is less ideal for Unity development. Your objects don’t need to inherit from anything to work with my system and if you are inheriting from Poolable at all you are using it wrong. Instead of inheritance you should be thinking about “object composition” – what components does the object have? If it has a Poolable component it can be pooled.
Also note that you don’t even directly add the Poolable component. It gets added automatically by the controller object when you ask it to make a type of object pooled.
I got it! Taking it as a component indeed makes more sense in Unity! Thanks!
When I ‘Dequeue’ more than the max count, it creates the prefabs at the root of the scene, not under the PoolController GameObject where PoolController is an empty GameObject with the GameObjectPoolController.cs script attached. Is this intentional or can you tell me what I’m doing wrong? I’m using the code from this page as is.
Also, these posts are awesome, please keep it up! Thank you!
Yes, the behavior is intentional. The “max” count is the maximum number of objects that the controller will keep alive. Remember that because the controller’s GameObject survives scene changes, so do any of its child GameObjects. I decided to allow the controller to create more than the specified max number of instances just in case there were unexpected use cases, but at the same time I wanted those extra objects to be destroyed when changing scenes, etc., so that memory pressure could be reduced. Also note that if you try to Enqueue beyond the max count, the controller will actually destroy the extra instances instead of tracking them.
This is a great post. I think the Live Training is geared for users with little to no experience and designed to quickly introduce concepts with immediate results. My introduction to programming was stumbling upon one of the Live Training videos and I was instantly hooked. While it’s an excellent marketing tool, as a learning resource it leaves a lot to be desired. They will often mention that there are better ways and though I found it quite frustrating it did point me in the right direction and keep me motivated.
With what I learned from the Unity site and a little YouTube, I nearly completed a fairly robust game before I realized what a tangled mess I had created. I had definitely taken on way more than anyone should for their first game, but I had passed the point of no return before realizing it. It seems like there’s not really a middle ground between beginner and advanced programming. This blog is the best bridge I’ve found, and I really like the way you expanded on the training that I already understood. So I’d like to ask a related question.
I have been using Unity’s serialization with the binary formatter and a singleton pattern for persistence, based on what I learned from the Live Training. It works but is lacking. I’ve been experimenting with scriptable objects and trying to figure out the what/why/how of JSON. You’ve already cleared up a lot for me, but I’m still unsure how to proceed. Would it be a bad idea to use a combination of the three? My limited understanding of serialization says go for it, while my advanced understanding of my limited understanding predicts issues. I’m mainly worried about JSON strings causing massive memory allocations if I use it to store all of my data. From what I see in the Tactics RPG project, you’re not implementing it?
Anyway, I apologize for my rambling, and I really appreciate the well-thought-out architecture of the overall project as well as the individual lessons and clear explanations (without having to watch a couple hours of videos to understand the why of the first ten minutes).
Hey Eric, it sounds like you’ve started touching on a lot of potentially large topics here. Before I dive in too deep, have you already had a chance to look at the post I did on saving data? If not, it will help explain some of the what, why, and how of using JSON. I have used JSON successfully on a number of projects, and appreciate its flexibility, but depending on how and when you use it memory and speed could be an issue.
Regarding the Tactics RPG project, I haven’t done anything to save data yet, but by that I mean the dynamically created game data which is made as a result of actually playing the game. Level data, unit recipes, etc which should be the same for everyone are able to be persisted as I develop the project thanks to Unity’s serialization in Scriptable Objects and prefabs etc.
I’ve read every post here at least a half dozen times, and some more than that. These are the topics I’ve spent the last couple of months trying to discover. I didn’t know what some even were, and others I had read about but couldn’t grasp until seeing them implemented and explained clearly. Some I got and implemented, in a barbaric fashion. Thanks to your help and what I have learned here, I now understand the implementation of ideas that made sense but that I was unable to put to use. I’ve been able to use scriptable objects (where I realized the depth of my serialization ignorance) in a couple of other scenarios in my project, and to refactor a stat system I had tried earlier to better suit my needs, plus so much more. I feel like I’ve spent months stuck in a traffic jam and you just showed me the fly button hidden under my dashboard. Now if I can only land safely…
Wow, haha, I am glad you are getting so much use out of everything. Since you’ve already read my post on Saving Data, you should be at a pretty good point to make informed decisions. There isn’t anything wrong with using the singleton data manager and binary serialization as well as JSON (that’s the mix of three you meant, right?).
Ultimately there are lots of reasons you might favor one system over another, and the decision should be based on whichever reasons are most important to you at the time. For example, binary serialization is a little easier to implement. It also has the benefit of being less modifiable by “normal” users who might want to cheat in your game. If you choose binary, one thing you will want to think about is how to version your data. For example, how hard will it be on you if you add, remove, or rename a variable – or worse, keep the same variable name but change its data type? Before shipping a game it’s not that big of a deal: just wipe any saved data and start over. After a game has shipped, you have to migrate the data. These kinds of scenarios can lead to crashes and/or loss of data depending on how you architect it.
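One common way to handle binary versioning is to write a version number ahead of everything else, then branch on it while loading. This is only a sketch – the class and field names are illustrative, not from any project discussed above:

```csharp
using System.IO;

// Sketch of version-stamped binary saves; all fields are hypothetical
public class SaveData
{
    const int SaveVersion = 2;

    public string playerName;
    public int gold;
    public int gems; // added in version 2

    public void Save (BinaryWriter writer)
    {
        writer.Write(SaveVersion); // the version always goes first
        writer.Write(playerName);
        writer.Write(gold);
        writer.Write(gems);
    }

    public void Load (BinaryReader reader)
    {
        int version = reader.ReadInt32();
        playerName = reader.ReadString();
        gold = reader.ReadInt32();

        // Older saves simply don't contain the newer field
        if (version >= 2)
            gems = reader.ReadInt32();
    }
}
```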
JSON serialization is great because you can easily read and even modify the saved results, which can be a handy debugging and testing feature during development. There is a general rule that you should not prematurely optimize, so I wouldn’t avoid JSON simply for fear of memory allocations. Most of the allocations will be native C# objects which the garbage collector can do a good job of reclaiming anyway. Plus, even when you are ready to ship, it isn’t hard to encrypt the output string and/or simply write it to a file with binary just the same as you had done before. Versioning is much easier because you have full control over the serialization process. You can check for the presence of dictionary keys to see which variables have been saved, as well as use error handling around the loading process itself.
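The key-presence check might look something like the sketch below. It assumes your JSON library hands you a Dictionary parsed from the saved string; the class and field names are hypothetical:

```csharp
using System.Collections.Generic;

// Sketch: "data" is assumed to be a Dictionary produced by whatever JSON
// parser you use; key checks let older saves load safely with defaults.
public class ProfileLoader
{
    public string playerName;
    public int gold;

    public void Load (Dictionary<string, object> data)
    {
        if (data.ContainsKey("playerName"))
            playerName = (string)data["playerName"];

        // A variable added in a later version falls back to a default
        gold = data.ContainsKey("gold") ? System.Convert.ToInt32(data["gold"]) : 0;
    }
}
```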
It’s hard to give specific answers to general questions, but hopefully this helps?
Yes, exactly what I was after, along with some additional insight. If I understand correctly, then I think I was on the right track: using JSON for the different character data, and binary to encrypt it and to keep profile data along with data shared between characters.
Versioning is something I haven’t really looked into yet. I’ve been making asset bundles from each system as I manage to decouple them, and a sort of master project bundle to use while I experiment. I was planning on using GitHub as soon as I had everything separated and a solid foundation built, so I knew I wouldn’t need to change the majority of the project each time.
I know how frustrating it can be to try to answer questions that someone doesn’t quite know how to ask and I thank you very much for taking the time to help!
Great, I am glad to help.
Also, just in case you didn’t know, GitHub has a soft limit of about 1 GB on repository size, so be careful how many revisions of binary art assets you include. I’ve worked on several “small” projects which hit that limit a lot faster than I would have expected. Otherwise I really like it, and I do like keeping my art and code together whenever possible.
Good luck on your project!
Nice post. I really liked your clean pooling implementation (as well as the accompanying analysis), though I do have one minor gripe:
Unity can’t serialize Dictionary!
While it’s not that big a deal and easy to replace with, say, serializable Lists, note that you won’t be able to do things like ‘hot-reloading’ (i.e. recompile code while still in playmode in the editor) without a bit of hassle (though it’s not entirely impossible if you really need to use a Dictionary for lookup for whatever reasons).
Hint: To emulate a Dictionary, you could always store one List for keys and another List for values, and sew them up into a non-serialized Dictionary in Awake() and/or OnEnable(). (This doesn’t solve the problem of wanting to modify the dictionary and have it “saved” back, but it is a handy trick in a lot of other cases.)
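The two-list trick can also be wired up through Unity’s ISerializationCallbackReceiver interface, which exists for exactly this kind of conversion. A minimal sketch, with a hypothetical class name:

```csharp
using UnityEngine;
using System.Collections.Generic;

// Sketch: Unity serializes the two lists, and the dictionary is rebuilt
// after deserialization. SerializableLookup is an illustrative name only.
public class SerializableLookup : MonoBehaviour, ISerializationCallbackReceiver
{
    [SerializeField] List<string> keys = new List<string>();
    [SerializeField] List<GameObject> values = new List<GameObject>();

    Dictionary<string, GameObject> lookup = new Dictionary<string, GameObject>();

    public void OnBeforeSerialize ()
    {
        // Optionally write the dictionary back out to the lists here
    }

    public void OnAfterDeserialize ()
    {
        lookup.Clear();
        for (int i = 0; i < keys.Count && i < values.Count; ++i)
            lookup[keys[i]] = values[i];
    }
}
```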
Hot-reloading is an interesting idea in theory, but in practice I can’t imagine any “real” project being compatible. There are simply too many things which can’t survive the serialization process. I would rather take advantage of things like dictionaries, object references, static classes, etc for the sake of ease of architecture over the minor advantages of hot-reloading any day.
Out of curiosity, just how “large” of a project have you managed to create which still works with it?
I’m mostly new to Unity, so most of my “projects” have been relatively small experiments and games used to learn Unity and determine best practices. I’ve found hot-reloading to be pretty handy when prototyping, but yes, it is touchy, so I can see how large complex projects with many team members have no real hope of being serialized live.
Lately though I’ve discovered that recompiling calls OnDisable when invoked, then OnEnable when it’s completed, so now I’m wondering how complex a project can get and still be able to hot-reload if one strictly uses serializable types and is very careful where all initialization and destruction takes place.
But yeah, I’ve been a programmer for awhile now, and as you say, I too prefer clean, efficient, readable code and architecture over most things that would sacrifice it. I just thought I’d bring it up for discussion’s sake. Cheers.
A friendly reader on Reddit pointed out some tips I felt were worth sharing. First, he said that by definition, the “search list” Mike implemented should be referred to as O(n) not O(n/2) as I allowed. It is a “linear” search of an unsorted list just like the examples listed on the link I shared on Big-O notation. It doesn’t matter that sometimes it might return more quickly.
In addition, he created another pool called “EmptyPool” and measured its time to around 20ms:
public class EmptyPool : BasePool
{
PoolMe pm;
public EmptyPool(int count)
{
pm = new PoolMe();
}
public override PoolMe GetPooledObject()
{
return pm;
}
public override void ReturnPooledObject(PoolMe obj)
{
obj.isPooled = true;
}
}
He suggested that we could use the time from this implementation to remove the overhead of my implementation which would give a better comparison of the difference between searching a list and using a Queue. In other words, you could treat the completion times as:
SearchPool = (128ms – 20ms) = 108ms
QueuePool = (31ms – 20ms) = 11ms
Which would mean that the Queue would be even more than 4 times faster – around 10 times as fast. Great tips, thanks patiencer!