Multithreading in iOS - Part 2/4


Instead of tying a block of code to a specific queue, you can create a block of code that can be executed on any thread.

A DispatchWorkItem is a block of code that can be dispatched on any queue, so the contained code can be executed on a background thread or on the main thread. It can be written as follows.

Let’s see a small example to understand how DispatchWorkItem objects are used.
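A minimal sketch (the variable and the arithmetic are illustrative): a work item is created once and then run synchronously on the current thread with perform():

```swift
import Foundation

var value = 10

// A DispatchWorkItem wraps a block of code that can be run later,
// on any queue, or synchronously where we are now.
let workItem = DispatchWorkItem {
    value += 5
}

// perform() executes the work item synchronously on the current thread.
workItem.perform()
print(value)
```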

Instead of calling perform(), we can also hand the work item to a queue for execution, as follows.
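A sketch of dispatching the same kind of work item onto a queue (the queue choice and the flag variable are illustrative); wait() is used here only so the command-line example does not exit before the item runs:

```swift
import Foundation

var didRun = false

let workItem = DispatchWorkItem {
    didRun = true
    print("work item running")
}

// Hand the work item to a queue instead of calling perform():
DispatchQueue.global(qos: .userInitiated).async(execute: workItem)

// wait() blocks the current thread until the work item finishes.
workItem.wait()
print("didRun: \(didRun)")
```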

How can we cancel a work item?

One reason to use an explicit DispatchWorkItem is that we may need to cancel the task before or during execution. This is achieved by calling cancel() on the work item, which performs one of two actions:

  1. If the task has not yet started on the queue, it will be removed.
  2. If the task is currently executing, the isCancelled property will be set to true.
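A sketch of the second case (queue label, step count, and sleep intervals are all illustrative): the item is already running when cancel() is called, so cancellation only sets isCancelled, and the block must check the flag itself to stop early.

```swift
import Foundation

let queue = DispatchQueue(label: "com.example.serial")   // label is illustrative
var outcome = "finished all steps"

var workItem: DispatchWorkItem!
workItem = DispatchWorkItem {
    for step in 1...5 {
        // Long-running work should poll isCancelled and bail out early.
        if workItem.isCancelled {
            outcome = "cancelled at step \(step)"
            return
        }
        Thread.sleep(forTimeInterval: 0.1)   // simulated work
    }
}

queue.async(execute: workItem)

// The item is already executing, so cancel() only sets isCancelled;
// the block itself is responsible for checking the flag.
Thread.sleep(forTimeInterval: 0.25)
workItem.cancel()
workItem.wait()
print(outcome)
```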


With dispatch groups we can group together multiple tasks and either wait for them to be completed or be notified once they are complete. Tasks can be asynchronous or synchronous and can even run on different queues.

Dispatch groups are managed by a DispatchGroup object.

Below are the steps to use a DispatchGroup:

  1. Create a new dispatch group.
  2. Call enter() to manually notify the group that a task has started. You must balance the number of enter() calls with the number of leave() calls, or your app will crash.
  3. Call leave() when a task finishes, to notify the group that this work is done.
  4. Call wait() to block the current thread until the tasks complete. You can use wait(timeout:) to specify a timeout and bail out after a specified time.
  5. At this point, you are guaranteed that all tasks have either completed or timed out. You can then make a call back to the main queue to run your completion closure.
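The steps above can be sketched as follows; the task names, simulated latency, and the 2-second timeout are all illustrative:

```swift
import Foundation

let group = DispatchGroup()              // step 1
let lock = NSLock()                      // protects `results` across threads
var results: [String] = []

for name in ["profile", "feed"] {        // hypothetical network calls
    group.enter()                        // step 2: a task has started
    DispatchQueue.global().async {
        Thread.sleep(forTimeInterval: 0.1)   // simulated network latency
        lock.lock(); results.append(name); lock.unlock()
        group.leave()                    // step 3: this work is done
    }
}

// step 4: block until both tasks finish, or bail out after 2 seconds.
switch group.wait(timeout: .now() + 2) {
case .success:  print("all tasks finished")
case .timedOut: print("timed out")
}
print(results.sorted())                  // step 5: safe to use the results
```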

If you don't want to block while waiting for the group to finish, but instead want to run a function once all the tasks have completed, use notify(queue:) in place of group.wait().
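A sketch of notify(queue:) (the task names are illustrative; the trailing semaphore exists only to keep this command-line example alive until the closure fires):

```swift
import Foundation

let group = DispatchGroup()
let done = DispatchSemaphore(value: 0)   // keeps this sketch alive
var completed = false

for name in ["task 1", "task 2"] {       // hypothetical tasks
    group.enter()
    DispatchQueue.global().async {
        print("\(name) finished")
        group.leave()
    }
}

// Unlike wait(), notify(queue:) does not block the current thread;
// the closure runs once every enter() has been balanced by a leave().
group.notify(queue: .global()) {
    completed = true
    print("all tasks completed")
    done.signal()
}

done.wait()
```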

In which situations can you use DispatchGroup?

Some use cases for DispatchGroup are:

  • When you need to run two distinct network calls, and only after they have both returned do you have the necessary data to parse the responses.
  • When you need to run an animation alongside a network call or database update, and only after both finish do you show a message, and many more situations like this.

Below is an example of executing work items on different queues with a dispatch group.

Note that different syntaxes are used here; you can follow any of them to execute the code.
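A hedged sketch of that idea (the queue label and work-item bodies are made up): two work items run on different queues, tied together by one group, using two different syntaxes.

```swift
import Foundation

let group = DispatchGroup()
let utilityQueue = DispatchQueue.global(qos: .utility)
let customQueue = DispatchQueue(label: "com.example.concurrent",  // illustrative
                                attributes: .concurrent)
let lock = NSLock()           // protects `finished` across queues
var finished: [String] = []

let itemA = DispatchWorkItem {
    lock.lock(); finished.append("A"); lock.unlock()
}
let itemB = DispatchWorkItem {
    lock.lock(); finished.append("B"); lock.unlock()
}

// Syntax 1: pass the group directly to async(group:execute:).
utilityQueue.async(group: group, execute: itemA)

// Syntax 2: balance enter()/leave() around the work yourself.
group.enter()
customQueue.async {
    itemB.perform()
    group.leave()
}

group.wait()
print("finished: \(finished.sorted())")
```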

How can you delay task execution?

DispatchQueue allows you to delay task execution. Take care not to use this to mask race conditions or other timing bugs by introducing delays as a hack; use it when you genuinely want a task to run at a specific time.

We use the seconds method here, but besides that, the following ones are also provided:

  • microseconds
  • milliseconds
  • nanoseconds
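A sketch of asyncAfter (the 500 ms delay is illustrative; the semaphore only keeps this command-line example alive until the delayed block runs):

```swift
import Foundation

let semaphore = DispatchSemaphore(value: 0)
var ranAfterDelay = false

// Schedule a task 500 milliseconds from now; .seconds(_:), .milliseconds(_:),
// .microseconds(_:) and .nanoseconds(_:) all build a DispatchTimeInterval.
DispatchQueue.global().asyncAfter(deadline: .now() + .milliseconds(500)) {
    ranAfterDelay = true
    semaphore.signal()
}

print("scheduled")
semaphore.wait()
print("ranAfterDelay: \(ranAfterDelay)")
```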

How can we get the current queue name?

The API provides a function to check whether a call is on the main thread or not, but there is no public API to get the current queue's name.
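A sketch of both points: Thread.isMainThread is the supported check, and the current queue's label can be read through a low-level libdispatch call, a trick commonly used for debugging only (the queue label here is illustrative).

```swift
import Foundation

// Checking for the main thread is supported directly:
print("on main thread: \(Thread.isMainThread)")

// There is no public "current queue" API, but the current queue's label
// can be read through this low-level call (for debugging only):
func currentQueueLabel() -> String {
    String(cString: __dispatch_queue_get_label(nil))
}

let queue = DispatchQueue(label: "com.example.worker")   // label is illustrative
var observed = ""
queue.sync {
    observed = currentQueueLabel()
}
print("ran on queue: \(observed)")
```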


Thread-safe code can be called safely from multiple threads without causing problems such as data corruption or app crashes. When we use a singleton, or other code that is not thread safe, we can run into data corruption. We can avoid this issue by using a dispatch barrier.

How does a dispatch barrier work?

It allows us to create a synchronisation point within a concurrent dispatch queue. What does that mean? Consider a concurrent queue, which executes tasks concurrently in normal operation; while a barrier is executing, the queue acts as a serial queue. After the barrier finishes, the queue goes back to being a normal concurrent queue.

GCD takes care of any blocks submitted to the queue before the barrier call: they run to completion first. Any blocks submitted to the queue after the barrier block will not be executed until the barrier block completes. The barrier block itself is dispatched asynchronously, so the submitting code returns immediately.

For a singleton class, we initialise the singleton only once. Swift initializes static variables when they are first accessed, and it guarantees that this initialization is atomic, i.e. thread safe, so there is no need to worry when initialising a singleton.

The bigger concern is reading and writing a shared resource from multiple threads. If one thread is reading the shared resource while another thread is trying to write to it, we might end up with data corruption. This can be solved by using a dispatch barrier.

Technically, when we submit a DispatchWorkItem or block to a dispatch queue with the barrier flag, we indicate that it should be the only item executing on the specified queue at that particular time. All items submitted to the queue before the dispatch barrier must complete before the barrier item executes. While the barrier executes, it is the only task running, and the queue does not execute any other tasks during that time. Once the barrier finishes, the queue returns to its default behaviour.

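A sketch of the common reader/writer pattern this describes (queue label, dictionary, and helper names are all illustrative): writes go through the barrier flag so they run exclusively, while reads run concurrently but wait for any pending barrier writes.

```swift
import Foundation

// A custom concurrent queue guarding a shared dictionary.
let isolation = DispatchQueue(label: "com.example.isolation",
                              attributes: .concurrent)
var store: [Int: String] = [:]   // the shared resource

// Writes are submitted with the barrier flag, so each write runs exclusively.
func setValue(_ value: String, forKey key: Int) {
    isolation.async(flags: .barrier) {
        store[key] = value
    }
}

// Reads run concurrently with each other, but wait for pending barrier writes.
func value(forKey key: Int) -> String? {
    isolation.sync { store[key] }
}

setValue("first", forKey: 1)
setValue("second", forKey: 2)
let result = value(forKey: 1) ?? "missing"
print(result)
```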

When you would and wouldn't use barrier functions:

  • Custom Serial Queue: A bad choice here. Barriers won't do anything helpful, since a serial queue executes one operation at a time anyway.
  • Global Concurrent Queue: Use caution here. This probably isn't the best idea, because other systems might be using the queue, and the barrier would block their execution until its code completes.
  • Custom Concurrent Queue: This is a great choice for atomic or critical areas of code. Anything you're setting or instantiating that needs to be thread-safe is a great candidate for a barrier.


In multithreaded code, threads must often wait for exclusive access to a resource. A semaphore is one way to make threads wait: it puts them to sleep inside the kernel so that they no longer take any CPU time.

It gives us the ability to control access to a shared resource by multiple threads.

A semaphore consists of a thread queue and a counter value (an Int).

The thread queue is used by the semaphore to keep track of waiting threads in FIFO order.

The counter value is used by the semaphore to decide whether a thread should get access to the shared resource or not. The counter value changes when we call the signal() or wait() functions.

When should we call the wait() and signal() functions?

  • Call wait() each time before using the shared resource. Here we are asking the semaphore whether the shared resource is available or not.
  • Call signal() each time after using the shared resource. This signals the semaphore that we are done interacting with the shared resource.

Calling wait() does the following:

  • Decrements the semaphore counter by 1.
  • If the resulting value is less than zero, the thread is blocked and goes into a waiting state.
  • If the resulting value is zero or greater, the code gets executed without waiting.

Calling signal() does the following:

  • Increments the semaphore counter by 1.
  • If the previous value was less than zero, it unblocks the thread that has been waiting longest in the thread queue.
  • If the previous value was zero or greater, the thread queue is empty, i.e. no one is waiting.

Never run the semaphore's wait() function on the main thread, as it will freeze your app.

The wait() function also allows us to specify a timeout. Once the timeout is reached, the wait finishes regardless of the semaphore's count value.

What is a shared resource?

A shared resource can be a variable, or a task such as downloading an image from a URL, reading from a database, etc.

A semaphore can be used in any case where you have a resource that can be accessed by at most N threads at the same time. You set the semaphore’s initial value to N and then the first N threads that wait on it are not blocked but the next thread has to wait until one of the first N threads sends signal to the semaphore. The simplest case is N = 1. In that case, the semaphore behaves like a mutex lock.
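A sketch of the N-threads case (N = 2 here; the task count, sleep interval, and counters are illustrative): at most two simulated "downloads" run at once, and we record the peak concurrency to show the limit holds.

```swift
import Foundation

// Allow at most 2 concurrent "downloads" (N = 2).
let slots = DispatchSemaphore(value: 2)
let group = DispatchGroup()
let lock = NSLock()          // protects the counters below
var active = 0
var peak = 0

for _ in 1...6 {
    group.enter()
    DispatchQueue.global().async {
        slots.wait()                          // blocks once 2 tasks are running
        lock.lock(); active += 1; peak = max(peak, active); lock.unlock()
        Thread.sleep(forTimeInterval: 0.05)   // simulated download
        lock.lock(); active -= 1; lock.unlock()
        slots.signal()                        // free a slot for a waiting task
        group.leave()
    }
}

group.wait()
let withinLimit = peak <= 2
print("peak \(peak), within limit: \(withinLimit)")
```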

Without a semaphore, running this code gives a different result each time. With a semaphore, we always get the result [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
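A sketch of that deterministic version (queue choice and range are illustrative): each iteration dispatches a task, then waits on the semaphore until that task has appended its value, so the array always comes out in order. Note that this is a command-line sketch; in an app, you would not call wait() like this on the main thread.

```swift
import Foundation

let queue = DispatchQueue.global(qos: .userInitiated)
let semaphore = DispatchSemaphore(value: 0)
var results: [Int] = []

for i in 0...10 {
    queue.async {
        results.append(i)      // write to the shared resource
        semaphore.signal()     // tell the waiting thread we are done
    }
    semaphore.wait()           // don't dispatch the next task until then
}

print(results)
```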


DispatchSource provides a family of objects that are capable of monitoring OS-related events.

Dispatch sources are a convenient way to handle system-level asynchronous events, such as kernel signals or system, file and socket related events, using event handlers.

Dispatch sources can be used to monitor the following types of system events:

  • Timer Dispatch Sources (DispatchSourceTimer): Used to generate periodic notifications.
  • Signal Dispatch Sources (DispatchSourceSignal): Used to handle UNIX signals.
  • Memory Dispatch Sources (DispatchSourceMemoryPressure): Used to register for notifications related to memory usage status.
  • Descriptor Dispatch Sources (DispatchSourceFileSystemObject, DispatchSourceRead, DispatchSourceWrite): Descriptor sources send notifications related to various file- and socket-based operations, such as:
  1. a signal when data is available for reading
  2. a signal when it is possible to write data
  3. a file being deleted, moved, or renamed
  4. file meta-information changing

This enables us to easily build developer tools that have “live editing” features.

  • Process Dispatch Sources (DispatchSourceProcess): Used to monitor an external process for events related to its execution state, such as:
  1. the process exits
  2. the process issues a fork or exec type of call
  3. a signal is delivered to the process

Let’s see some use cases:

1. DispatchSourceTimer use case

  • Timer runs on the main thread, which has a main run loop to drive it. You can't easily run a Timer on a background thread, because Timer requires an active run loop, which is not always readily available on background queues.
  • In this situation, DispatchSourceTimer can be used. A dispatch timer source fires an event when the time interval has elapsed, which then fires a pre-set callback, all on the same queue.

There are some issues with DispatchSourceTimer due to a bug in libdispatch: if you call resume() or suspend() on a timer that is already resumed or suspended, the app will crash.

The app will also crash when you deallocate a suspended timer, so it is better to create a wrapper class on top of the dispatch timer to handle these issues.

This timer will automatically fire events on a background queue. The reason is that DispatchSource.makeTimerSource(), when given no queue, delivers its events on a default background queue.
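A sketch of such a timer (the 100 ms interval and three-tick cutoff are illustrative; the semaphore only keeps this command-line example alive):

```swift
import Foundation

let done = DispatchSemaphore(value: 0)
var ticks = 0

// With no queue argument, makeTimerSource() delivers events on a default
// background queue, so no run loop is needed.
let timer = DispatchSource.makeTimerSource()
timer.schedule(deadline: .now(), repeating: .milliseconds(100))
timer.setEventHandler {
    ticks += 1
    print("tick \(ticks)")
    if ticks == 3 {
        timer.cancel()         // stop after three events
        done.signal()
    }
}
timer.resume()                 // a source starts suspended; resume exactly once

done.wait()
```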

2. File logging in Swift (DispatchSourceFileSystemObject, DispatchSourceRead, DispatchSourceWrite) use case:

Every app prints debug logs to the developer console, and it's good practice to save these logs somewhere. OSLog automatically saves your logs to the system, but a developer can also save these logs elsewhere.

Why would a developer want to save these logs, and what is the use of doing so?

If your app receives a bug report from an external beta tester, you can easily find and inspect your own log file instead of teaching that user how to extract and send their OSLogs. This can be achieved by using dispatch sources.
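The file-monitoring source itself (DispatchSourceFileSystemObject) is platform-specific, so here is a portable sketch of the same descriptor-source idea using DispatchSourceRead: the handler fires when data becomes available on a descriptor. A pipe stands in for the log file or socket, and the "new log line" payload is made up.

```swift
import Foundation

// Create a pipe; fds[0] is the read end, fds[1] the write end.
var fds: [Int32] = [0, 0]
pipe(&fds)

let done = DispatchSemaphore(value: 0)
var received = ""

// A read source signals us when data is available for reading.
let source = DispatchSource.makeReadSource(fileDescriptor: fds[0],
                                           queue: .global())
source.setEventHandler {
    var buffer = [UInt8](repeating: 0, count: 64)
    let count = read(fds[0], &buffer, buffer.count)
    received = String(bytes: buffer.prefix(max(count, 0)), encoding: .utf8) ?? ""
    print("read: \(received)")
    source.cancel()
    done.signal()
}
source.resume()

// Writing to the other end of the pipe triggers the event handler.
let line = Array("new log line".utf8)
write(fds[1], line, line.count)
done.wait()
```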


Enjoy your coding! I hope you learnt something from this blog. Please hit the clap button below 👏 to help others find it, and follow me on Medium.




iOS Developer at Walmart

Manasa M P
