AV Block, Amiodarone and Pacemakers

Please note that I do not have a medical background. However, I have firsthand experience with Arrhythmias (abnormal heart rhythms), and I aim to share my personal experience through this blog post and others to shed light on the challenges of living with heart disease.

Atrioventricular Block

I was recently admitted to A&E after a faint and subsequently diagnosed with atrioventricular block. AV block refers to a condition where there is a delay or interruption in the transmission of electrical signals from the upper chambers (atria) to the lower chambers (ventricles) of the heart. This can result in a slower heart rate or even a complete block of electrical impulses, which can be dangerous and lead to symptoms such as dizziness, fainting, or heart failure. AV block and tachycardia are not directly related, and their treatments can differ depending on the underlying cause. I already had an ICD implanted in April 2020, after a suspected cardiac arrest whilst out riding my bike, caused by ventricular tachycardia (a fast heart rate that originates in the ventricles of the heart) – see The unexpected diagnosis – ARVC. This device was extracted and replaced by an S-ICD in February 2023 – see Living with ICD complications.

Amiodarone

In late November 2022, I was prescribed amiodarone to treat my irregular heart rhythm, specifically to reduce my ectopic burden of PVCs (premature ventricular contractions) in preparation for my S-ICD installation in February 2023. My PVCs are estimated to make up more than 30% of my heartbeats, meaning that over 3 in every 10 beats are ectopic. However, it’s important to note that in some cases, amiodarone can actually cause or worsen a condition called atrioventricular block. This occurs because the medication affects the heart’s conduction system, which can cause delays or interruptions in the transmission of electrical signals, leading to AV block becoming more prominent or apparent.

Its potency stems from its ability to block multiple types of ion channels in the heart, which helps to stabilize the heart’s electrical activity and prevent abnormal rhythms. Additionally, amiodarone has a long half-life, meaning that it remains in the body for an extended period of time, allowing for sustained therapeutic effects. However, it can also result in significant side effects. The potency of amiodarone requires close monitoring by a doctor to ensure that the medication is effective while minimizing the risk of side effects.

The half-life of amiodarone can range from 20 to 100 days, depending on various factors such as age, gender, and overall health. This means that it can take several weeks or even months for the body to eliminate half of the dose of amiodarone taken.

Some of the side effects include, but are not limited to:

Low Risk: Blurred vision; Corneal deposits; Dermatological reaction; Gastrointestinal signs and symptoms; Photophobia; Phototoxicity; Skin photosensitivity; Visual halos around lights; Xerophthalmia.

Medium Risk: Hypotension; Nausea; Constipation; Cough; Dizziness; Dyspnea; Fatigue; Abnormal gait; Asthenia; Malaise; Persistent ventricular tachycardia; Pneumonitis.

High Risk: Abnormal hepatic function tests; Bradycardia; Heart block; Involuntary body movements; Neurotoxicity; Paresthesia; Solar dermatitis; Tremor; Visual disturbance; Blue-gray skin pigmentation; Hypersensitivity pneumonitis; Interstitial pneumonitis; Pulmonary toxicity; Pulmonary alveolitis.

Leadless Pacemaker

A leadless pacemaker is a small device that is used to regulate the heartbeat. It is about the size of a large vitamin capsule, with a weight of only 2 grams.

This pacemaker is designed to be placed inside the heart, so there are no wires that need to be threaded through the veins like those of traditional pacemakers, or of the ICD whose lead installation caused my SVC obstruction (Living with ICD complications). Instead, this pacemaker is inserted into the heart via a small incision in the upper leg and guided into place through the femoral vein. Once in place, the pacemaker uses electrical impulses to regulate the heartbeat, just like a traditional pacemaker. The pacemaker can last for up to 14 years and can be monitored remotely by healthcare professionals to ensure that it is working properly, albeit this consumes the battery faster.

When a pacemaker is installed, the settings can be adjusted to meet the specific needs of the patient. One common setting is the minimum ventricular rate, which determines the lowest rate at which the pacemaker will stimulate the heart to beat. In my case, the pacemaker is set to maintain a minimum ventricular rate of 40 beats per minute.

Another important setting in pacemakers is called AV synchrony, which will address the AV block symptoms I have experienced. AV synchrony ensures that the atria (the top chambers of the heart) and ventricles (the bottom chambers of the heart) work together properly. Specifically, it ensures that the ventricles contract after the atria have finished contracting, which helps to optimize the blood flow through the heart.

In addition to these basic functions, modern pacemakers have a variety of advanced features. For example, they can automatically adjust the pacing rate based on the patient’s activity level or other physiological signals. Currently in trial is a pacemaker that can communicate with other implanted devices, such as defibrillators (S-ICD), to coordinate their actions and optimize the patient’s heart function. If you want to learn about this new modular technology and see how the device is implanted, take a look at the Boston Scientific Emblem Modular S-ICD / Leadless Pacemaker.


Encryption in Power Automate

This is actually a really good demonstration of efficiency in Power Automate. My son asked me the question: can you encrypt and decrypt a string using a key and the Vigenere cipher? They’ve just covered this technique in school. In this post, not only do I demo how it can be done, I also show how it can be done without an Apply to Each. There is also a live version of the solution allowing you to encrypt and decrypt your own strings using your own code word.

What is the Vigenere Cipher?

The Vigenere cipher was invented by Blaise de Vigenere in the 16th century. It is a method of encrypting alphabetic text by using a series of interwoven Caesar ciphers, with each letter of the plaintext being shifted based on a keyword.

To encrypt a message using the Vigenere cipher, a keyword or phrase is chosen, and each letter of the message is shifted according to the corresponding letter of the keyword. For example, if the keyword is “LEMON” and the plaintext is “HELLO”, the first letter of the plaintext is shifted by “L”, the second letter by “E”, the third letter by “M”, the fourth letter by “O”, and the fifth letter by “N”. The resulting ciphertext would be “SIXZB”.

To decrypt a message using the Vigenere cipher, the recipient would need to know the keyword that was used to encrypt the message. The recipient would then use the same keyword to shift each letter of the ciphertext back to its original plaintext letter.
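In index terms (A=0, B=1 … Z=25), the shift is simple modular arithmetic, and it is exactly what the flow below reproduces with mod(), add() and sub(). A quick sketch for the first letter of the example above, written as Power Automate expressions:

mod(add(7, 11), 26) // encrypt: H (7) shifted by L (11) gives 18, i.e. ‘S’
mod(add(sub(26, 11), 18), 26) // decrypt: ‘S’ (18) unshifted by L (11) gives 7, i.e. ‘H’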

The Vigenere cipher was considered unbreakable for many years, until Charles Babbage and Friedrich Kasiski independently discovered a method for breaking it in the mid-19th century. This was done by identifying repeated patterns in the ciphertext, which allowed them to determine the length of the keyword. Once the length of the keyword was known, the cipher could be broken using frequency analysis.

Despite its vulnerability to modern decryption methods, the Vigenere cipher remains an important historical encryption technique and is still used in some limited applications today.

Trigger the Flow from Microsoft Forms

This seemed like the obvious way to demonstrate this and allow you to encrypt and decrypt your own strings using the Power Platform. Remember, DO NOT SHARE ANYTHING CONFIDENTIAL, including the code word. In order to receive your encrypted/decrypted string, you must supply a valid email address or Power Automate won’t be able to email you with the supplied answer.

The Power Automate Flow

The flow consists of a trigger for the Form, followed by get response details, and then I have a scope containing all of the necessary actions. Some of these actions could be combined with more complex expressions, but I decided to keep them separate as they don’t affect performance. The flow runs in under 1 second.

Note that in my implementation, spaces, numbers and symbols are carried through the shift logic but are returned unchanged. Similar implementations online will remove spaces and/or numbers.

Initially I created a grid array of the letters and the shifted patterns, 26 of each, but then I realised it was easier to get the flow to simply create a string from the code word, repeated up to the length of the string to be encrypted/decrypted, work out the index of each of those letters, and then work out the index of each letter in the string itself. With these indexes it is then possible to add on the shift, use modulus to get the remainder over the alphabet length of 26, and return the new character based on the calculated shift for each letter of the code word.

And this is what the Flow looks like:

Up until this point, the index shifts are being calculated in the form of arrays
Vigenere Cipher in Power Automate
Then based on the Encrypt/Decrypt parameter, the offsets are calculated and a new string is returned via email

In More Detail

I will now explain the actions in more detail as follows:

ComposeAlphabet: is used to define the letters that we are looking to shift, and it will also allow us to calculate the index of each letter based on its position.

ComposeAlphabetArray: using the expression chunk(outputs('ComposeAlphabet'), 1), an array is created of all the letters.

ComposeSecret: the secret code word from my Microsoft Form.

SelectSplitSecretAndFindLetterNo: Two expressions in this one. The first is chunk(outputs('ComposeSecret'), 1), so that our secret string is now an array of letters. The second expression, in the Map (text mode), is indexOf(outputs('ComposeAlphabet'), item()) and is used to calculate the index of each secret word letter based on the opening alphabet string: A=0, B=1 … Z=25. As objects in an array can be selected by integer indexes, this works nicely for our efficient flow (demos of this on my YouTube videos).

ComposeString: is the string to be encrypted or decrypted from the Microsoft Form.

SelectSecretSequence: Two expressions here. The first, range(0, length(outputs('ComposeString'))), creates an array based on the length of the string, starting from 0. We can then use this to our advantage in the map so that we can repeat our code word for the length of the string, i.e. if the code word is Cat and the string is Damien, the sequence would be CatCat. The second expression is body('SelectSplitSecretAndFindLetterNo')?[mod(item(), length(outputs('ComposeSecret')))] and it selects the letter index from the secret based on the modulus, i.e. remainder. If the code word is 3 in length, 0 mod 3 is 0, 1 mod 3 is 1, 4 mod 3 is 1. Using the result we can retrieve the letter from the secret word array based on integer index [0] [1] [2] etc.

ComposeStringArray: creating an array of letters with chunk(outputs('ComposeString'), 1)

SelectStringSequence: two expressions here. The first, range(0, length(outputs('ComposeString'))), is an array of numbers, starting from 0, of size based on the length of the string. The second, in the map, is indexOf(outputs('ComposeAlphabet'), outputs('ComposeStringArray')?[item()]), where we return the index of each letter in the string based on our alphabet string. This gives us an array of indexes.
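To make the intermediate data concrete, here is a hedged walk-through of the Cat/Damien example from above (assuming everything is upper-cased first, so C=2, A=0, T=19 and D=3, A=0, M=12, I=8, E=4, N=13), showing what each action would output:

SelectSplitSecretAndFindLetterNo: [2, 0, 19]
SelectSecretSequence: [2, 0, 19, 2, 0, 19]
SelectStringSequence: [3, 0, 12, 8, 4, 13]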

Condition: the purpose of this is to branch yes or no based on whether the chosen action is Encrypt or not (i.e. Decrypt), so that the appropriate logic is applied to the indexes that have been calculated above. For an encrypt the indexes are added; for a decrypt the process is reversed.

Encrypt

SelectOffsetEncrypt: two expressions. The from is based on range(0, length(outputs('ComposeString'))), which we have seen multiple times now. The map is mod(add(body('SelectSecretSequence')?[item()], body('SelectStringSequence')?[item()]), 26), where we add the secret sequence index and the string sequence index, using mod 26 to return the remainder, and this becomes our new letter index. This creates an array of new letter indexes.

SelectEncryptedLetters: two expressions. Again the trusty range(0, length(outputs('ComposeString'))) as the from, with if(equals(body('SelectStringSequence')?[item()], -1), outputs('ComposeStringArray')?[item()], outputs('ComposeAlphabetArray')?[body('SelectOffsetEncrypt')?[item()]]) as the map. This checks to see if the sequence equals -1, in which case indexOf could not find the letter in the previous select actions and we are most likely dealing with a number or symbol. If it equals -1, return the original number or symbol from the string array; otherwise return the letter from the alphabet array based on the calculated offset. For both of these expressions we use an integer index, allowing us to select the letter based on the position generated by range(). The result of this action is an array of encrypted letters based on the calculated offsets.
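Continuing the hedged walk-through from above, adding the two sequences element by element under mod 26 gives:

SelectOffsetEncrypt: [5, 0, 5, 10, 4, 6]
SelectEncryptedLetters: ["F", "A", "F", "K", "E", "G"]

i.e. DAMIEN encrypted with the code word CAT becomes FAFKEG.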

Decrypt

SelectOffsetDecrypt: is based on two expressions. The from is the same range expression we’ve seen before, and the map is mod(add(sub(26, body('SelectSecretSequence')?[item()]), body('SelectStringSequence')?[item()]), 26). Here we subtract the secret letter sequence index from 26 and add on the string letter sequence index, before using mod to extract the remainder. This allows us to reverse the original encryption based on the calculated indexes.

SelectDecryptedLetters: is based on two expressions. The from is, you guessed it, the same range expression to calculate those indexes, and the map is similar to the encrypt, albeit using the indexes from the SelectOffsetDecrypt array: if(equals(body('SelectStringSequence')?[item()], -1), outputs('ComposeStringArray')?[item()], outputs('ComposeAlphabetArray')?[body('SelectOffsetDecrypt')?[item()]]). The result of this action is an array of decrypted letters based on the calculated offsets.

Compose: because the logic branches yes or no depending on whether the user has selected encrypt or decrypt, we need to get the result from above that isn’t null. We can do this using coalesce, which returns the first non-null value. The expression join(coalesce(body('SelectEncryptedLetters'), body('SelectDecryptedLetters')), '') will first take the first non-null array from either encrypt or decrypt and then join the array of letters to form the newly encrypted or decrypted string.

Send an email: Based on the email address supplied to the form, I generate and send an email, appending “ed” onto the word encrypt/decrypt, to provide your new string.

Happy encrypting / decrypting! Some videos that cover some of these techniques can be watched below:


Service Health and Message Centre via Power Automate

In this blog post, we’ll explain how to use Power Automate to monitor your Microsoft 365 service health and admin message center. With this method, you’ll have more control and customization options for your organization’s services.

We’ll start by creating an Azure app registration and using the Graph API to bring data into Power Automate. From there, you can save this information to a SharePoint List, publish it to your organization, build a Power App, or set alerts for degraded service health via Teams or email.

By using this method, you’ll have a more efficient and effective way to monitor and manage your Microsoft 365 services.

SharePoint List, Gallery View of Service Status via HealthOverview


Azure App Registration

In order to grant access to the Graph API, you will need an app registration with either delegated or application permissions. I have chosen application permissions to avoid having to tie this to a specific user account.

Below is a step by step guide which should enable you to set up the app registration, generate and gather a client secret value, obtain the client ID and OAuth endpoint, and of course grant the necessary permissions to allow access to the Service Health and Communications Graph API.

  1. Open the Azure portal https://portal.azure.com and sign in with your Azure account.
  2. Select “App registrations” and click on “New registration”.
  3. Give your app a name and select the appropriate account type (e.g. single tenant or multi-tenant).
  4. Click on “Register”.
  5. Once the app is registered, open the “Certificates & secrets” section.
  6. Click on “New client secret” and enter a description.
  7. Select the expiration period and click “Add”.
  8. Copy the newly created client secret value and save it in a secure location.
  9. Open the “API permissions” section and click on “Add a permission”.
  10. Select “Microsoft Graph” and choose the desired permissions (e.g. ServiceHealth.Read.All & ServiceMessage.Read.All – Application Permissions).
  11. Click on “Add permissions”.
  12. Click on the “Grant admin consent” button to grant the app the necessary permissions.
  13. Now your app is registered in Azure and ready to be used with the Microsoft Graph API.
  14. Open the “Overview” section and copy the “Application (client) ID”
  15. Click on Endpoints and copy the “OAuth 2.0 token endpoint (v2)”

Testing & creating Graph queries using Graph Explorer

Via the Graph Explorer it is very easy to build and test REST API calls to the Graph API. For service messages we can use https://graph.microsoft.com/v1.0/admin/serviceAnnouncement/messages to pull back all of the familiar data from the Microsoft 365 Admin Message Center (details here), and then there is https://graph.microsoft.com/v1.0/admin/serviceAnnouncement/healthOverviews to pull back data on general platform health, as well as obtaining specific details about the platform of your choice, including updates and links as seen in the Microsoft 365 Admin Service Health (details here). Note you have plenty of other options to suit your requirements, including archiving and marking messages as read.

When these endpoints are combined with OData filters and a select or orderby parameter, we can really tailor the data received back from the REST API. More details can be read up on here.

For the example in this post I am going to filter the service announcement messages for the following services: Microsoft 365, Planner, Microsoft Power Automate, Power BI, Power Apps, Microsoft Forms; ensure that we only retrieve messages from 1st January 2023 onwards; and order these messages by the start date time. Also, given the data returned is quite expansive, I use select to retrieve specific fields, mainly id, category, severity, tags, title, startdatetime, enddatetime, services, details and body. This allows me to limit and be selective about the data returned back to Power Automate.

A sample Graph API query for this data would be as follows:

https://graph.microsoft.com/v1.0/admin/serviceAnnouncement/messages?$select=id,category,severity,tags,title,category,startdatetime,enddatetime,services,details,body&$filter=startDateTime ge 2023-01-01 and services/any(p:p in ('Microsoft 365', 'Planner', 'Microsoft Power Automate', 'Power BI', 'Power Apps', 'Microsoft Forms'))&$orderby=startDateTime+desc
Graph Explorer API Call

Setting up Power Automate for Authentication

In order to authenticate with our Azure App registration via Power Automate, we can use the HTTP action or build a custom connector. For the example today, I have used one HTTP action to obtain a bearer token and then a subsequent one to query the Graph API endpoint. The bearer token is valid for 60 minutes, so it can be used as many times as you wish during your flow run, assuming the flow finishes in under 60 minutes.

Setup of the HTTP POST to obtain a token is fairly straightforward, albeit I always have to reach for Google to remind myself how to do it. About time I wrote a blog post to remind myself. The Method is of course POST, the URI is based on the endpoint copied from your app registration (step 15 above) and will include the GUID for your tenant. Using the client ID and client secret in your body, you should be able to get a response back and then use the access token value from the body in the next step.

//Header
{
  "Content-Type": "application/x-www-form-urlencoded"
}

//Body
client_id=['client_id']&client_secret=['client_secret']&grant_type=client_credentials&scope=https://graph.microsoft.com/.default
HTTP action to obtain a bearer token from Azure App Registration

Get Service Announcements via HTTP

In order to retrieve the service announcements, we can construct an HTTP action to https://graph.microsoft.com/v1.0/admin/serviceAnnouncement/messages and pass the OData parameters as defined and tested earlier in Graph Explorer. It is worth noting that you cannot pass these parameters directly in the URI; they must be included in the Queries parameters, as seen below.

HTTP Action to obtain the Service Announcements
{
  "$filter": "startDateTime ge @{startOfMonth(utcnow(),'yyyy-MM-dd')} and services/any(p:p in ('Microsoft 365', 'Planner', 'Microsoft Power Automate', 'Power BI', 'Power Apps', 'Microsoft Forms'))",
  "$select": "id,category,severity,tags,title,category,startdatetime,enddatetime,services,details,body",
  "$orderby": "startDateTime desc"
}

If you’ve watched my YouTube, you will know that I love my expressions. If you’re wondering how to get the access token, the expression is body('HTTP_Get_Token')?['access_token'], where HTTP_Get_Token is the name of the HTTP action. You must prefix this with the word Bearer and a space, i.e. Bearer AccessToken. If, after setting this up, you are getting strange errors but are getting a bearer token in your first HTTP action, an incredibly useful site for decoding these tokens is here; it will determine the permissions that have been granted by the authentication. Worth checking you have been granted the Graph application permissions via the Azure App Registration.

Repurposing the data

Using a Select action we can repurpose the data from the value array returned by the HTTP action. I have typed these expressions manually based on item()?['nameofkey']. Note that for the services value, an array is returned by default. I have therefore created a string from the array by using join(). The expression would therefore be join(item()?['services'], ', ').
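As a minimal sketch, the Select Map might contain the following (the left-hand key names are my own labels, not anything mandated by the connector; the right-hand keys follow the $select fields, with the casing Graph returns):

Id: item()?['id']
Severity: item()?['severity']
Title: item()?['title']
Services: join(item()?['services'], ', ')
Start: item()?['startDateTime']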

Select data from HTTP Post
Repurpose data from HTTP Action

With the data repurposed, it can be passed to the Create HTML Table action as input and, of course, you probably know the rest at this point. There are many uses for the data in an array: send a Teams message, build a dynamic adaptive card, add items to a list or Dataverse, or use the data directly in a Power App.

Sample HTML Table
HTML Table of Service Announcements

Get Health Overview via HTTP

Very much like the action above, you can also query the HealthOverview endpoint and get a very compact array of service name and status. If you liked the opening images on this post, I used this data to update a SharePoint list and display it in Gallery Mode. Remember that this data is specific to your tenant, so it is incredibly useful for making organisation-wide announcements should there be any lengthy downtime. How much time could you save for your front line IT helpdesk if end users could check the status of a service using a Virtual Agent or IT SharePoint Communication Site?

HTTP action to get health overview.

I also experimented with Queries to include an $expand on issues. This enabled me to retrieve far more detailed information about each platform. As well as the current status, it was possible to retrieve the most recent update on a platform degradation, details of which were in a nested array.

Example of using $expand on issues

By combining this data with the sort() and reverse() expressions, it is possible to sort the update data by date descending and save the most recent update. Again, this could be potentially useful information for end users or front line IT. Below you can see an example of the potential data that can be captured.
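A hedged sketch of that expression, assuming a hypothetical compose named ComposeIssuePosts that holds the nested updates array, and a createdDateTime key on each update (per the Graph serviceAnnouncement schema as I understand it):

first(reverse(sort(outputs('ComposeIssuePosts'), 'createdDateTime')))?['description']?['content']

sort() orders the posts ascending by date, reverse() flips them to descending, and first() takes the most recent update.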

Sample data about Platform Service Health

Hope you enjoyed this, please let me know how your org has used this to solve a problem and don’t forget to check out my YouTube 👍


Living with ICD complications

An implantable cardioverter-defibrillator (ICD) is a small device that is placed under the skin of the chest to monitor and treat heart rhythm problems. It is often used in people who have a high risk of sudden cardiac arrest. I survived a suspected cardiac arrest in 2020 and crashed my bike; an ICD was implanted as my safety net. Apart from greatly reducing my exercise levels due to the nature of my diagnosis, ARVC, I was living a relatively normal life (I even took part in the Caledonia Etape on an e-bike in September 2021) until I started noticing my neck swell and jugular veins pop out when I bent over, went swimming, lightly exerted, vacuumed or stirred a sauce for dinner.


The risk of SVC obstruction

While an ICD can be life-saving, there are potential risks associated with the implantation procedure. One of the most serious complications is superior vena cava (SVC) obstruction. The superior vena cava is a large vein that carries blood from the head, arms, and upper body to the heart. The ICD leads (wires) are positioned inside the SVC and can cause a blockage, leading to symptoms such as swelling in the face, neck, and arms, shortness of breath, and a blue or cyanotic appearance of the skin.

Neck gorging
My current ICD


I first began to experience some of these symptoms in October 2021 and, after a clear CT and discharge from hospital A&E, I subsequently went into arrest and was shocked in December 2021, as can be seen in the video below.

Earlier today I suffered a cardiac arrest & was shocked back to life by my ICD. To put perspective on things, I’m lucky to be alive but am feeling very sorry for myself and the pressure I put my family under as a result. I live another day! https://t.co/vrR50DYOpZ @TheBHF pic.twitter.com/GPvfRggBz0

— Damien Bird | Microsoft | MVP2022 (@DamoBird365) December 14, 2021

SVC obstruction is a rare but potentially serious complication of ICD implantation. If SVC obstruction does occur, it can be treated with a procedure called Angioplasty / Venoplasty, in which a small balloon is used to open up the blocked vein, commonly via an incision in your groin, or with a surgical procedure to remove the obstruction. I had an attempted Angioplasty post arrest in December 2021 but the results were not perfect nor permanent. A second Angioplasty was attempted in September 2022 but was deemed too high risk during the operation.

Other risks

Other risks associated with ICD implantation include infection, bleeding, and injury to the heart or surrounding blood vessels. There is also a small risk of the device malfunctioning, which could lead to a failure to deliver therapy when it is needed, or inappropriate shocks. Patients with an ICD need to be aware of magnetic fields that can disable their device. It does mean a pat down at an airport, for instance, but you also need to be aware of magnets in your headphones or mobile phone, as often these are placed around your neck or in close proximity.

Image courtesy of BBC

Read more about footballer Christian Eriksen here.

What other options are there?

An S-ICD, or subcutaneous implantable cardioverter-defibrillator, is a type of implantable device that is used to treat heart rhythm problems such as ventricular fibrillation and ventricular tachycardia. Unlike a traditional ICD, which is implanted under the skin of the chest and has leads (wires) that are placed into the heart, an S-ICD is implanted just under the skin of the chest, with a lead that also sits just under the skin, on the surface of the chest. The S-ICD does not require leads to be placed within the heart, making it a less invasive option for some patients.

S-ICD benefits, risks and limitations

One of the main benefits of an S-ICD is that it eliminates the risk of lead-related complications, such as lead fractures, infections or SVC obstruction, which can occur with traditional ICDs. Additionally, S-ICDs are less likely to cause damage to the heart or surrounding blood vessels during implantation. S-ICDs are also suitable for patients who are not suitable for traditional ICDs, for example, patients with congenital heart disease, or patients with limited venous access or previous cardiac surgery.

On the other hand, the S-ICD may not be suitable for all patients. For example, it may not be as effective as a traditional ICD in detecting certain types of heart rhythm problems, such as bradycardia (a slow heart rate) or providing pacing therapy. Additionally, the S-ICD may not be able to provide the same level of therapy as a traditional ICD in certain cases, such as cardioversion, which is a procedure that uses electrical energy to restart a normal heart rhythm.

Another potential risk associated with the S-ICD is that the leads sit just under the skin, which can make them more visible and may cause aesthetic concerns for some patients. But at this point in my journey, I am desperate for a normal quality of life that allows me to do some gardening, vacuum the house or go cycling with my kids or friends, without feeling like a balloon is about to pop in my chest and head.

An S-ICD is approx 8cm x 6.5cm x 1.5cm in size and includes a lead and rod
My old ICD site
An incision for the lead and rod
The new S-ICD

Check out Footballer Charlie Wyke here who was fitted with an S-ICD and demonstrates where it’s been implanted.

An update on my situation

My surgery, originally scheduled for January 16th, was cancelled. Later, on January 24th, 2023, I went to the Accident and Emergency (A&E) department due to worsening symptoms. A few days after being discharged, I went to Glasgow for an operation to remove my ICD and replace it with an S-ICD, on February 2nd, 2023.

In the future, I will have an angioplasty to address the narrowing of the vein caused by natural tissue growth around the leads; the leads have now been extracted, but the tissue remains. This operation only dealt with the complications caused by the ICD, such as tissue growth over the leads and partial blockage. My heart disease remains the same after the surgery.

I hope that by sharing my experience, I can raise awareness among patients and healthcare professionals about the potential complications, risks, and benefits of ICDs and S-ICDs.


What is the Power Platform?

Power Platform is a collection of low-code, no-code tools created by Microsoft that allows users to automate business processes, create custom apps, and analyze data. The platform includes five main components: Power Automate, Power Apps, Power Virtual Agents, Power BI and Power Pages.

Power Automate (formerly known as Microsoft Flow) allows users to automate repetitive tasks and processes by creating custom workflows. These workflows can be created in two different environments: cloud-based and desktop-based.

A cloud-based flow is a workflow that is hosted on Microsoft’s cloud servers and can be accessed from anywhere with an internet connection. Cloud-based flows are typically used for automating tasks that involve external services or data, such as sending an email or creating a record in a database. They can also be triggered by events such as a new email arriving in a mailbox or a new record being added to a database.

Desktop flows are a type of robotic process automation (RPA) within Power Automate. Desktop flows are designed to automate repetitive, manual tasks on a user’s computer, such as data entry, copy-pasting, and file management tasks. They can be used to automate tasks across multiple applications, such as Microsoft Office, web browsers, and more. Desktop flows are created using the Power Automate Desktop app and can be run on a user’s computer.

In summary, while Cloud flows are hosted on Microsoft’s cloud servers and can automate tasks that involve external services or data, Desktop flows are PC-based and are used to automate repetitive tasks on a user’s computer.

Power Apps is a platform for creating custom business apps. It allows users to create and use apps that work on any device, and can be integrated with other apps and services. Power Apps is designed to be easy to use, even for people with no coding experience.

Canvas apps and model-driven apps are two different types of apps that can be built with Power Apps.

A canvas app is a blank slate that allows you to create an app from scratch by dragging and dropping various elements onto the screen. For example, you could create a canvas app for tracking employee expenses, where you could add text boxes for entering expenses, a button to submit the expense report, and a data source to store the information.

A model-driven app, on the other hand, uses a pre-built model or template to create an app. This type of app is best suited for creating business apps with a lot of data and forms, for example, a customer management app. The app automatically generates forms, views and charts based on the data model you choose.

Power Virtual Agents is a tool that allows users to create and manage chatbots for customer service and other purposes. It allows users to create a conversational flow and responses with no coding required. The chatbot can be integrated with other systems, such as Dynamics 365, to provide information and perform tasks.

Power BI is a business intelligence and data visualization tool that can be used to connect to, visualize, and analyze data from a variety of sources, including Excel, SQL Server, and cloud-based services. It can help users to gain insights, make data-driven decisions, and share interactive reports and dashboards with others.

Power Pages allows users to create and publish web pages quickly and easily. You can create forms, surveys, and landing pages with a drag-and-drop interface, and customize the look and feel using pre-built templates. Power Pages can be used to create a wide range of web pages, such as customer portals, employee self-service portals, and marketing landing pages. It provides a simple way for users to create and publish web pages without the need for IT or web development expertise.


Add user to Distribution List

It’s still the case that you cannot directly add a user to an Exchange Distribution List via Power Automate, as can be seen here: Working with groups in Microsoft Graph – Microsoft Graph v1.0 | Microsoft Learn (distribution groups cannot be managed by the Graph API). It’s been over 12 months since I wrote my blog post Add members to a distribution list – Power Automate, when it was initially possible to do so. This functionality was removed by design and has never made a comeback. In an attempt to explore the options available out there, I came across a PowerShell command to both add and remove users from a distribution group. Why not bring this to an Azure Runbook? That’s what I will demonstrate in the following article.

** Update 11th March 2023 **

When I wrote this updated blog post in January 2023, it was possible to achieve this, but when I set it up live for a video recording, it didn’t work, and I thought that the distribution list cmdlets for Exchange Online had been removed from the cloud-based service. Add-DistributionGroupMember (ExchangePowerShell) | Microsoft Learn.

Via Twitter, a couple of other users appear to confirm that this still works. I am trying to find time to re-test and then release a video of it all working 👍

Referring to your today’s update, which error do you encounter adding or removing distribution list members? With the setup according to my following post, I’m able to adjust DL membership with managed identity-based auth in Azure runbooks. (1/2) https://t.co/y0b8fdLFx8

— Dustin Schutzeichel (@CloudProtectNja) March 11, 2023

The Solution

With an Automation account on Azure, you can write PowerShell Runbooks. In this case I have written two very simple scripts that accept the distribution group name and member email address as parameters. We can view these Runbooks from our Automation account:

Runbooks for Adding and Removing Members of a distribution group.

Below we can view the code to both add and remove a member to/from a distribution list on Exchange.

<#
    .DESCRIPTION
        A sample script to add a user to a distribution group

    .NOTES
        AUTHOR: Damien Bird
        LASTEDIT: 9th January 2023
#>

param(
[string]$DistroGroup,
[string]$Email
)

try
{
    "Logging in to Exchange..."
    Connect-ExchangeOnline -ManagedIdentity -Organization abdndamodev.onmicrosoft.com
    "Adding user..."
    Add-DistributionGroupMember -Identity $DistroGroup -Member $Email
    "User Added"
}
catch {
    Write-Error -Message $_.Exception
    throw $_.Exception
}
<#
    .DESCRIPTION
        A sample script to remove a user from a distribution group

    .NOTES
        AUTHOR: Damien Bird
        LASTEDIT: 9th January 2023
#>

param(
[string]$DistroGroup,
[string]$Email
)

try
{
    "Logging in to Exchange..."
    Connect-ExchangeOnline -ManagedIdentity -Organization abdndamodev.onmicrosoft.com
    "Removing user..."
    Remove-DistributionGroupMember -Identity $DistroGroup -Member $Email -Confirm:$false
    "User Removed"
}
catch {
    Write-Error -Message $_.Exception
    throw $_.Exception
}

In order to call these Runbooks from the Power Platform, we have a few options that I am aware of. The first and most straightforward is the Azure Automation connector, which allows you to run a job on Azure. Below we can see two Power Automate actions to remove Henrietta from our New Distro Group. We simply create a job and then check the job output for success.

Azure Automation in Power Automate

The other options available to us are adding a webhook to the runbook and calling it directly (Start an Azure Automation runbook from a webhook | Microsoft Learn), or using API Management (Import an Azure Function App as an API in API Management – Azure API Management | Microsoft Learn) and building a custom connector, so that we can easily call the new functionality from across the Platform using actions (Power Platform connectors overview | Microsoft Learn).

Distribution group in Exchange

As the Azure Runbook to add or remove members is run, the distribution group is updated to reflect any changes.

The setup

We need to set up 3 things:

  1. An Automation account, to run our Runbooks
  2. A managed identity to enable access to Exchange via PowerShell
  3. Our Runbooks, to run our PowerShell scripts

How to setup a Managed Identity

Thankfully the documentation for this is good, and it can be achieved with a combination of PowerShell and the Azure Portal. You will need to install the Azure Az PowerShell module and the Microsoft Graph PowerShell SDK in preparation. In summary:

  1. Create an Automation account; in my case I called it “ExchangeFunctionality”. Quickstart – Create an Azure Automation account using the portal | Microsoft Learn
  2. Save the GUID of your managed identity into a variable in PowerShell, e.g. $MI_ID
  3. Add the Exchange Online PowerShell module to the managed identity via the Azure Portal. This is a case of adding a module “ExchangeOnlineManagement” to your Automation account.
  4. Connect to Graph via PowerShell and grant permissions for the managed identity to call Exchange Online.
  5. Assign an Azure AD role to the managed identity that fits with the permissions of your script. This is a combination of Azure Portal to assign and PowerShell to confirm. Albeit in my case the PowerShell failed as the Management Directory Role had already been assigned.

Setting up the Runbook and PowerShell Scripts

Another process that’s well documented (Manage runbooks in Azure Automation | Microsoft Learn), and I have a previous demo from February 2021, Power Automate meets PowerShell in Azure, where I brought the ability to enable/disable external sharing on SharePoint to Power Automate. The basic scripts for this process are shared earlier in this article.

What other use cases have you got for Azure Runbooks? Did you use the Azure Automation connector, webhook or API Management? Please let me know in the comments below.


Efficiently Filter a JSON object in Power Automate

Here is an interesting sample JSON object {} that contains a JSON array [] with 3 objects. The final solution will have 3000 objects. The aim is to retrieve two key values from each object whilst checking another nested array for a common string ‘ACABA’. Whilst it is perfectly acceptable to use an apply to each and loop through all 3000 objects, this will eat into your 24 hour API limit on the Power Platform, and whilst this hasn’t really been a concern in the past, Microsoft are lowering those limits and will begin enforcement once a new admin level report has been released.

On a sliding 24 hour period, a user will have 6,000 API requests as an O365 licensed user; this increases to 40,000 for those that are licensed per user, and 250,000 with a per-flow licence. There is no time like the present to understand efficiency in your flows.

The problem
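The original post shows the object in a screenshot. As a minimal sketch, the assumed shape (field names inferred from the expressions later in this post, values purely illustrative) would be:

{
  "result": {
    "data": [
      { "id": 1, "productid": "P100", "tags": [ "ACABA", "NEW" ] },
      { "id": 2, "productid": "P200", "tags": [ "ACABA" ] },
      { "id": 3, "productid": "P300", "tags": [ "OTHER" ] }
    ]
  }
}

Note the third object does not contain ‘ACABA’, so the filter described below should reduce the array from three objects to two.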

The solution

In two simple actions (and therefore two API calls), it is possible to re-purpose the array and filter the result. With a Select action we can include the 3 key values: the id, product id and the tags array. We can then filter on this new array by converting the tags array into a string. We cannot easily repurpose the tags array in the select without using something like XPath, so I will leave that for another day. It is worth noting that string comparison is case sensitive, therefore you might want to consider using toUpper() or toLower().

With the original object in the compose, the Select is repurposing the data from the object to form a new array. We must supply an array [] to the select, in this case the data array []. This can be accessed from the result object {}. An expression for this might look as follows: outputs('compose')?['result/data'].

Output of the select contains a repurposed array.

With this newly formed array, we can then filter on the tags array as a string for the key string ‘ACABA’ using the condition “contains”. This will reduce the original array of three objects to two, as we know the third does not contain the required string. This flow will run in a matter of seconds and consume only two API calls (three if you count the compose containing the original Object).

The expressions used in this solution are:

item()?['id']
item()?['productid']
item()?['tags']
string(item()?['tags'])
Repurpose and filter an array using Power Automate

How did you find that? Please let me know below and make sure you check out my other content on YouTube. Find me on social media platforms as DamoBird365 and make sure you say hi. Thanks for reading.


Bulk Import Tasks into Planner

Using Power Automate and Excel (or any other available data source 😉), you can bulk import tasks into Planner. I have previously recorded a video on this process and use both a tasks table and a config table to support this cloud flow. This will enable you to dynamically choose the group and plan name, as well as import planner tasks including a title, bucket id / name, start and due date, assigned user ids, a category, priority, check list and file attachment(s)! This flow is not for the faint hearted but should support you with your goal to build a Planner Power Automate integration.

sample excel file with tables
Sample Excel Tables

A sample flow and template file is available to download from my github in the video description and at the bottom of this article, but I would strongly encourage you to watch the video to see how I created this solution.

As some of the expressions are complex, I have made them available via this post to supplement the original video.

The Trigger and Initial Setup

The flow is currently manually triggered; you could of course create an excel file and run on a recurrence trigger, or use “when a file is created” and simply upload a template file to a watched document library. If you import my solution (legacy), there are some short notes to follow, and you will need to update the first two list rows actions, which retrieve the tasks and config tables from Excel.

trigger and two list rows actions

Getting the Group ID

In order to run this solution dynamically, we need to retrieve the group id of the plan. You will tend to see that I use scopes to bring actions with a particular purpose together. We list the teams that we have access to, filter that teams list where the team name is equal to the name from our config table and then output the group id into a compose.

The expressions used here are:

outputs('ListRowsTableConfig')?['body/value']?[0]?['GroupName']

body('Filter_array_Teams')?[0]?['id']

Note that we have used the integer value [0] to retrieve the first object from the array, otherwise we would find ourselves in an apply to each loop. The alternative is to use first().

Get group ID via Power Automate

Listing Plans and Buckets

Below we list all of the plans for an existing group and then filter those plans based on the plan name from the excel config table. Finally we list all buckets that exist on that plan.

The expressions used here are:

outputs('ListRowsTableConfig')?['body/value']?[0]?['PlanName']

body('Filter_array_Plan_ID')?[0]?['Id']
List plans and buckets

Create Buckets that are missing

Next we must create buckets that are missing from the action above. For example, we have a plan with 3 buckets but our excel table has 4 buckets; maybe we have introduced a new quarter (Q4) or project stage to our table but not yet configured it on our plan. The next few actions will identify the missing buckets and create them for us.

The first select (in text mode) allows us to create an array of bucket ids (note that we use the name and not the guid) used in our excel sheet (as seen above). Then, using a union, we can get the distinct bucket ids, i.e. names (see the sketch below). We can then use another select to get the bucket names from the list buckets pre-creation action, and finally filter the unique buckets (the distinct buckets in excel) where the buckets from the planner do not contain the name of the bucket from the distinct buckets. The expression used here is item(), referring to each bucket id / name.
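As a hedged sketch, assuming the first select is named SelectExcelBuckets: union() of an array with itself returns only its distinct values, so the distinct bucket names can be obtained with

union(body('SelectExcelBuckets'), body('SelectExcelBuckets'))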

create new buckets

Then, for each bucket identified by the filter, we loop and create those buckets. The input to the apply to each is the output from the filter array; the name is based on the current item, the group id on the compose of the group id, and the plan id on the plan id from the filter.

Expressions used here are:

body('Filter_array_Plan_ID')?[0]?['Id']

List buckets post creation and create an Object {} of Buckets

We then list the buckets again post creation in order to get the internal bucket ids used to create the tasks in the final stages of our flow. These actions rely on the unique internal guid and not the friendly name you might call your bucket. As the buckets are returned as an array, we have repurposed this data to create an object. The advantage of an object is that you can call the guids by the friendly name. I have seen other solutions require that you store the guid in the table; this is not the case here. If you want to understand more about objects, watch this video https://youtu.be/PD980sKKx0E.

The select, in text mode, allows us to create an array of strings based on a key/value pair –

"Bucket Name" : "Bucket Guid"

We can then use this array of strings to our advantage: join them on a comma, add a { and } to the start and end, and then parse as JSON. All of which you can watch in my video above. The alternative is to filter for each bucket guid as you create your tasks, or store that guid in your sheet. I think this method is easier once set up and will create something similar to:

{
"Bucket Name1" : "Bucket Guid1",
"Bucket Name2" : "Bucket Guid2"
}

Expressions used here are:

concat('"', item()?['Name'], '":"', item()?['id'], '"')
json(concat('{', join(body('SelectBuckets'), ','), '}'))

Create tasks on our Plan

For each row in our excel table, we want to create a new task in our planner. For this we have a scope with 7 actions! The first action will create a simple task, with the bucket id, start and due date, assigned user ids, category and, don’t forget, your priority (not in the screenshot but hidden away at the bottom of the action parameters).

Expressions used here are:

Plan ID: first(body('Filter_array_Plan_ID'))?['id']

Bucket ID: outputs('ComposeBucketsArray')?[items('Apply_to_each')?['Bucket Id']]

Due Date: if(empty(items('Apply_to_each')?['Due Date Time']), formatDateTime(addDays(items('Apply_to_each')?['Start Date Time'], 7), 'yyyy-MM-dd'), items('Apply_to_each')?['Due Date Time'])

Pink, Red, Yellow etc: if(equals(items('Apply_to_each')?['Category'], 'Pink'), true, false)

Creating a Check List

Next we must create an array for our check list; each item in the list must have a unique ID, a title (based on our excel table column) and an isChecked value, which we have set to false, i.e. not checked. For that we use a select and a filter (to remove any objects with a blank title).

Expressions used here are:

From: range(0,length(split(items('Apply_to_each')?['Check List'],','))) 
id: item() 
title: split(items('Apply_to_each')?['Check List'],',')?[item()]

To explain in a bit more detail, the range creates an array of numbers from 0, based on the length of an array. In the excel table there is a check list that is comma separated. Split will create an array of check list items and length will count those values. This gives us our unique IDs.

Sample check list

In the Map, we have id, which is item(), and that refers to the unique ID from the range, i.e. [0,1,2]. The title is based on the integer index of an array, created by splitting the comma separated list. Therefore if you have 3 check list items, an array [item1,item2,item3] is created. To select item2, we use index [1]; item3 is index [2]. Indexes start from [0], i.e. first(). Range allows us to create both a unique ID and select list items by integer.

The filter array simply removes any values from the array where the title is of zero length.
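As a hedged illustration, with a check list cell of item1,item2,item3, the select would output something like:

[
  { "id": 0, "title": "item1", "isChecked": false },
  { "id": 1, "title": "item2", "isChecked": false },
  { "id": 2, "title": "item3", "isChecked": false }
]

The filter then simply drops any entry whose title is empty (e.g. from a trailing comma).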

Creating an attachments array

The attachments array is based on SharePoint links to file(s), comma separated. As before, we use a select and a filter array to remove any objects where the resource link is blank. The alias is fixed, the resource link is a link to said file and is accessed based on integer index (same as above), and the type is Word or Excel, determined by a check on the file extension.

Expressions used here are:


From: range(0,length(split(items('Apply_to_each')?['File Attachment'],',')))
ResourceLink: split(items('Apply_to_each')?['File Attachment'],',')?[item()]
type: if(endsWith(split(items('Apply_to_each')?['File Attachment'],',')?[item()],'lsx'),'E','W')

Update the Task with attachments, check list and make the check list visible

Lastly, we can add the references (i.e. the links) and the check list from both of the respective filter array action outputs. This is achieved with the update task details action. If you also want the check list items to be visible, you can enable this with a call to the Graph API based on the following expressions:

URI: https://graph.microsoft.com/v1.0/planner/tasks/@{outputs('Update_task_details')?['body/id']}/details

Body: { "previewType": "checklist" }

Content-Type: application/json

CustomHeader1: If-Match: @{outputs('Update_task_details')?['body']?['@odata.etag']}

CustomHeader2: Prefer: return=representation

The complete apply to each excel row to create and update a task looks like follows:

The sample excel template and template flow (legacy) can be downloaded from GitHub. Please let me know how you have developed your own solution in the comments below or via Social Media via DamoBird365.


Export Power BI Report or Visual to File

Power Automate has an action “Export to File for Power BI Reports” which enables you to export a Power BI visual or page as a PDF, PowerPoint (PPTX) or image (PNG) file. Below you will learn about the workspace requirements, how to identify the report page and visual names, and how to implement a report level filter. We will explore common errors, how to set up your Power Automate cloud flow, some of the limitations, the Power BI Playground and finally saving the exported file to SharePoint and sending via Teams or email.

1. Common Problems / Errors

Some common errors that you might experience when trying to use this action include:

1.1. FeatureNotAvailable

{"error":{"code":"FeatureNotAvailableError","pbi.error":{"code":"FeatureNotAvailableError","parameters":{},"details":[]}}}
Power BI Error FeatureNotAvailableError

This refers to the licensing requirements, specifically that you must have an embedded capacity on your tenant for Power BI. Exporting a Power BI report to file using the exportToFile API is not supported for Premium Per User (PPU). You can follow this Microsoft guide to set up an embedded capacity.

1.2. Export report to image is disabled on tenant level

This error refers to a tenant wide setting that specifically prohibits the use of the Power BI API to export reports as images.

Export report to image is disabled on tenant level

A setting “Export reports as image files” is available via the Power BI Admin Portal, Tenant Settings. You can enable this feature either for the whole organization or maybe a specific security group. You have to be a global admin or Power BI service admin to access the Power BI admin portal.

2. The Power Automate Action

Let’s now explore the action in order to export a report or visual, with a filter.

2.1. Action configuration

Based on the screenshot below, (1) the initial settings are straightforward: specify the Embedded Capacity Workspace, the Report and the Export format (in my case an image as PNG).

2.2. Report Level Filters

We can implement a filter based on a single table/field; some sample filters are available to view here. The filter is not based on the physical slicer in a report itself. Therefore, if you plan to run a report / visual extract, your drilling does not appear to affect the output of the visual. I overcame this by embedding a Power App, linking the single data table to the App and sending the selected data to Power Automate to implement as a dynamic filter. I will write this up if there is interest, albeit I plan to do a video demo on my YouTube.

2.3. Page Name

Page and Visual name go together, albeit Visual is optional. You must specify a Page in order to export, and you can in fact export multiple pages / visuals in one action by creating an array. Note that if you specify multiple pages/visuals, the output of the action will be a zip file of the specified files (e.g. PNGs), rather than an array of PNGs. You would therefore need to save and extract the images from the zip file output.

Retrieving the page name, in this case “ReportSection2”, is rather straightforward. Open up your report and navigate to the page in question. The page name is in the URL and appears as the last value after the report’s guid (as highlighted below). If you are unable to retrieve it this way, don’t worry, there is an alternative method that I will explain for your visuals.

ReportSection2 is the page name - based on the URL

If you are looking to retrieve multiple pages / visuals, turn on text mode for the Pages parameter; you can supply an array of objects for pageName and visualName, as sketched below. Remember that a Select action is very handy here, and the output of a select could be supplied directly to the Pages parameter.
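A minimal sketch of such an array (the page and visual names are illustrative placeholders; visualName can be omitted to export the whole page):

[
  { "pageName": "ReportSection2", "visualName": "VisualContainer1" },
  { "pageName": "ReportSection3" }
]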

2.4. Visual Name

The visual is that specific pie chart, bar chart, etc. Its appearance is not based on any filters you apply to the report in real time, but remember that you can specify a single filter using the parameter described above.

Retrieving the visual name is a bit more complicated, as you will need to use the Power BI Playground to retrieve it.

3. Power BI Playground

We will now explore how to use the Playground to retrieve both the page and visual names. Open the playground and choose “Try the developer sandbox”.

developer sandbox

You will then be asked how you would like to start; of course we want to “Use My Own Report”, so click on “Select Report”. Connect to your report and hopefully you will see something like the following:

Note that I have expanded the properties tab to highlight 3 code samples that you will require. You can run these by clearing out any existing code on the screen and dragging the code block across. You can then run the code and view the output using the browser developer tools. Access to the console window in developer tools is achieved quickly by pressing Ctrl + Shift + J. Alternatively, you can access it via the browser settings, More tools, Developer tools, making sure you are on the Console tab. Here is a guide for Edge if you are unsure.

3.1. Get active page

This will return and connect to the active page of the report that you are viewing. If you have multiple pages, make a selection on screen and then run this code. It will output the active page name in the console window – in my case, ReportSection3. Once connected to this page, you can select and run Get visuals.

get active page from Power BI Playground

3.2. Get visuals

This will return an array of all of the visuals for the specific page you selected above. You can expand each object/visual to see the type and title, which should help identify the name of the visual you are looking to export. Note that for my example the name is VisualContainer1. I can supply this value directly to my Power Automate action.

Visuals array, make a note of the name.

3.3. Get pages

Hopefully this needs no explanation: if you are really struggling to identify page names, run this and check the output in the console window.

4. Action Output

The easiest way to use the output is to save it to SharePoint or OneDrive. The filecontent dynamic value from the Power BI action is a base64-encoded image (unless of course you have specified multiple pages/visuals, in which case it would be a zip file).

By saving the image to SharePoint, you can embed the image using the thumbnail links available from the Get file properties action. If you combine this with the img src HTML tag, you can display the image in a Teams message. I did attempt to send the image as a base64 image for Teams, but at this point in time there is a 28KB image limit, so using the thumbnail link seemed like the next best option. You could of course directly embed the image into an email using base64, as demonstrated by w3docs. Note you will need to construct an expression to get the base64 encoding from the $content key:

outputs('Export_To_File_for_Power_BI_Reports')?['body']?['$content']
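As a rough sketch, embedding that base64 output directly in the HTML body of an email (per the w3docs approach) might look like this, assuming a PNG export:

<img src="data:image/png;base64,@{outputs('Export_To_File_for_Power_BI_Reports')?['body']?['$content']}" />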
A flow that saves the Power BI visual to PNG and sends via Teams

What’s next?

If your organisation already has the embedded workspace, I would love to hear if you have managed to implement the above automation. Have you sent images via Teams, Email or maybe even embedded them into Word templates? Did you use a Power App to implement a filter? Let me know in the comments below or drop me a message via my contact form.

The post Export Power BI Report or Visual to File first appeared on DamoBird365.

✇DamoBird365

Disable Microsoft Forms via Power Automate

It is technically possible to disable a Microsoft Form from Power Automate. Why would you want to do this? Maybe it’s time limited – OK, sure, you can specify an end date via the existing UI. But what if that date or time was to be dynamic?

What if you wanted to turn off the form based on the number of submissions? My use case is based on users signing up to an event. We have 10 spaces, and an 11th user tries to sign up. Wouldn’t it be nice if we could automatically disable the form after the 10th user and then notify the 11th that unfortunately we’ve reached capacity? Or, depending on the use case, maybe that 11th user will see that the form is now closed and that they are too late.

What I am going to share with you is unsupported. I have previously demonstrated how you can download the data from forms direct from the API back in August of last year (2021). You can watch that here.

Shutting down a Microsoft Form

If you have built a form and navigate to the ellipsis and then Settings, you can untick the box to Accept Responses and set a message. We are going to automate this in one single action within Power Automate.

The API endpoint consists of a tenant ID, a group or user ID and a form ID. If you are looking to read up on the specifics, I suggest you watch my video or read Hiro’s blog post here.

Below is the body that you must submit in order to update the closed status to true, or indeed false if you want to re-enable your form. You can also set the form closed message. I have made this dynamic to indicate when the form was closed; this is displayed to the end user.

{
  "settings":
    "{
      \"FormClosed\":true,
      \"FormClosedMessage\":\"We have reached capacity - @{formatDateTime(utcnow(),'dd/MM/yyyy HH:MM:ss')}\"
    }"
}
Microsoft Form personalised message

The single action

We simply send a PATCH to the Forms API with basic header information and the JSON body (per above). In order to do this, we need to update the tenant, group/user and form IDs based on the URI in the screenshot below.

Single action to disable or enable a microsoft form
/formapi/api/16c901d1-9763-49b1-961c-6cd701f5d0f7/users/6c646262-4f6f-4bfb-88c7-86b3d1252cac/forms('0QHJFmOXsUmWHGzXAfXQ92JiZGxvT_tLiMeGs9ElLKxUNEtSR05HNjNMNlZDMERZTEMzREc5SDFZWi4u')
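For illustration, the complete request might be shaped as follows. The forms.office.com host and the Content-Type header are my assumptions, based on what the browser sends when you change the setting manually; the IDs come from the sample URI above:

Method: PATCH
Uri: https://forms.office.com/formapi/api/{tenantId}/users/{userId}/forms('{formId}')
Headers: Content-Type: application/json
Body: the settings JSON shown earlier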

How would you implement this?

This is all down to your specific use case. It’s possible to return the number of responses on a form submission, and based on that number you could implement a condition. Similarly, if you are adding users to an event, you could count the number of attendees for that event. Maybe you could even delete the event from the form instead of disabling the form? I think that’s a challenge for another day.

If the number of attendees is greater than 4, disable form, else add them to the event

What do you think? Is there a need for this functionality natively? Let me know in the comments below and if you are looking to add users to an outlook event, why not read this post?

The post Disable Microsoft Forms via Power Automate first appeared on DamoBird365.

✇DamoBird365

Update Event and Hide Attendees

It would appear that the standard actions for events in Power Automate will send your attendees an email each time a new attendee is added or updated. In addition to this, attendees are visible to all. Using the Graph API and the Update event call, it is possible to add attendees without notifying others. It is also possible to hide all other attendees from each other.

I’ve previously blogged about managing events using Microsoft Forms. Put simply, enable users to register for a specific event using Microsoft Forms and Power Automate. But unfortunately this resulted in all attendees receiving an update when someone new completed the form. If you combine this blog with my solution above or watch my video on my YouTube, you can build yourself a very handy event registration system using Power Automate.

Hiding Attendees

The first challenge is to hide attendees. This can be run either as soon as a new event has been created, or each time a new attendee is added; there doesn’t appear to be a side effect of doing the latter. It’s one action, using the Graph connector’s Send an HTTP request. Note that I have a compose containing the Event ID to simplify the URL in all of the following actions.

hide attendees on an outlook event

In order to hide the attendees, we must set the parameter hideAttendees to true; by default this is set to false on any new event. When I view the event in my personal calendar as an attendee, I can only see my own attendance. As the event organiser, I can see the status of all individuals invited and I will continue to get updates when users accept or reject.
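A minimal sketch of the request, assuming the standard Graph v1.0 events endpoint and the Event ID from the compose mentioned above:

PATCH https://graph.microsoft.com/v1.0/me/events/{eventId}

{
  "hideAttendees": true
}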

only one attendee visible in new calendar event

Adding new attendees to an event using the API

First of all, we need to send an HTTP request to GET the details of the existing event; the response includes the existing attendees. According to the documentation, if we then send a request to update the attendees, only those that have been added will get an email, and this was certainly my experience during testing – unlike the native Outlook connector experience.

We then need to create a new array that includes an object for the new attendee. We can specify whether they are required or optional and, of course, their email address. I have built this using a compose action. The Microsoft documentation has further details and examples of settings that you could consider updating using this endpoint.
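For illustration, the compose for a single new attendee might contain an array like the one below; the address and name are placeholders, and type could equally be set to optional:

[
  {
    "emailAddress": {
      "address": "new.attendee@contoso.com",
      "name": "New Attendee"
    },
    "type": "required"
  }
]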

Now that we have the new attendee in an array, we need to merge the existing attendees with the new one. We can do this with the union expression. Note that I have retrieved the attendees array by extending the default expression for the body of the HTTP request with ?['attendees']. We then must place this into an attendees object: using a compose, we can insert the object curly brackets {}, insert the key “attendees” and use the union expression below to join the existing and new attendees.

union(outputs('Send_an_HTTP_request_Get_Attendees')?['body']?['attendees'],outputs('Compose_New_Attendee'))
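Placed into the attendees object described above, the compose would look like this:

{
  "attendees": @{union(outputs('Send_an_HTTP_request_Get_Attendees')?['body']?['attendees'],outputs('Compose_New_Attendee'))}
}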

Finally, we can PATCH the updated attendees object back to the event. This will ensure that the new attendee is added and sent an email; existing attendees will not receive an email and, if you have implemented the hideAttendees parameter, they won’t see how many other users are attending the event.

I would love to hear if this solution has worked for you. Have you previously encountered this issue in your own development? Let me know in the comments below.

The post Update Event and Hide Attendees first appeared on DamoBird365.

✇DamoBird365

Restore deleted Flows as an Admin

As an admin, you will know only too well that if a user deletes a flow, you need to raise a call with Microsoft to restore that flow to Power Automate. Until May 2022, that is, when the Restore-AdminFlow cmdlet was released.

There are native actions in Power Automate for administering flows, but there is currently no timescale for releasing an action to restore flows directly. If you were wondering whether it is possible to restore flows from Power Automate or Power Apps using the platform, then I have a solution for you – Azure Runbooks. I have a blog post showing you how I built a Runbook to change the sharing options of SharePoint using Power Automate.

It’s worth noting at this point that the Restore-AdminFlow cmdlet can only restore non-solution-aware flows!

Deleting your flows

Accidentally or intentionally, if you delete flows from Power Automate, you cannot restore them from the portal. You now have up to 28 days to action a restore using PowerShell.

Deleting a flow

In order to restore a flow, you must know either the FlowName, i.e. the GUID, or the DisplayName, i.e. the friendly name you have given your flow. My demonstration below will return both, by determining the flows that have been deleted in a specific environment, and will use the GUID to restore them.

Using PowerShell to restore a flow

You will first need to install the PowerShell support for Power Apps, available in the guide here. You will also need to be an environment admin for the environment in which you wish to restore flows. This is not for end users to restore their own flows, but for your IT department, who could adopt this solution to allow self-service, as I will demonstrate.

I have created two basic PowerShell scripts. The first determines an array of flows that have been deleted in the past 28 days. This is achieved by comparing two tables of data from the Get-AdminFlow cmdlet and outputting either a JSON array, for an automated Runbook and Power Automate integration, or a comma-separated string of FlowNames (the flow GUIDs). The second script accepts a comma-separated list of FlowNames (i.e. GUIDs) and restores each of those flows using the Restore-AdminFlow cmdlet. Restored flows will be disabled by default.

#DamoBird365
#PowerShell script to demo how to retrieve an array of deleted flows from a default environment
#Official Docs https://docs.microsoft.com/en-us/powershell/module/microsoft.powerapps.administration.powershell
#www.DamoBird365.com 
#www.youtube.com/c/DamoBird365

param (
    [string]$EnvironmentName = "Default-rg70379a-th7f-45c9-b7d4-hn207c7ca554"
)

#Credentials if using RunBook
#$myCredential = Get-AutomationPSCredential -Name 'PPEnvironmentAdmin' 
#$userName = $myCredential.UserName
#$securePassword = $myCredential.Password
#$password = $myCredential.GetNetworkCredential().Password

#Sign In To PowerApps PowerShell
Add-PowerAppsAccount # -Username $userName -Password $securePassword #If using RunBook

#Get ALL Flows (excluding deleted)
$NonDeletedFlows = Get-AdminFlow -EnvironmentName $EnvironmentName

#Get ALL Flows (including deleted)
$AllFlowsIncDeleted = Get-AdminFlow -EnvironmentName $EnvironmentName -IncludeDeleted $true

#Compare non with all to get deleted
$DeletedFlows = Compare-Object -ReferenceObject $NonDeletedFlows -DifferenceObject $AllFlowsIncDeleted -Property FlowName -PassThru

#Format Result as JSON
$DeletedFlowsJSON = $DeletedFlows | Select-Object -Property FlowName, DisplayName | ConvertTo-Json

Write-Output ($DeletedFlowsJSON)

#If you want a comma separated string of FlowNames for testing in PowerShell
$combined = $DeletedFlows | ForEach-Object { $_.FlowName }
$result = $combined -join ','
Write-Output ("")
Write-Output ($result)

#DamoBird365
#PowerShell script to demo how to restore deleted flows from a default environment
#Official Docs https://docs.microsoft.com/en-us/powershell/module/microsoft.powerapps.administration.powershell
#www.DamoBird365.com 
#www.youtube.com/c/DamoBird365

param (
    [string]$FlowsToRestoreString = "59d1cdd1-542e-4c13-8a59-b729221ebef5,7cf92a9d-c345-456b-9123-ce83291ab4b0",
	[string]$EnvironmentName = "Default-rg70379a-th7f-45c9-b7d4-hn207c7ca554"
	
)

#Credentials if using RunBook
#$myCredential = Get-AutomationPSCredential -Name 'PPEnvironmentAdmin' 
#$userName = $myCredential.UserName
#$securePassword = $myCredential.Password
#$password = $myCredential.GetNetworkCredential().Password

#Sign In To PowerApps PowerShell
Add-PowerAppsAccount # -Username $userName -Password $securePassword #If using RunBook

#Split string into an array
$FlowsToRestore = $FlowsToRestoreString.split(",");

#For each FlowName in the array, restore the flow 
$FlowsRestored = foreach ($Flow in $FlowsToRestore)  { Restore-AdminFlow -EnvironmentName $EnvironmentName -FlowName $Flow; Start-Sleep -Seconds 1 }

Write-Output ($FlowsRestored)

I demo how to use these scripts in my video. Note that I have commented out the credentials as used by the RunBook which is perfectly fine if all you want to do is run the PowerShell locally to restore flows ad-hoc.

RunBook Automation

For my Power Automate and subsequent Power App solution, I built two Azure RunBooks using the above scripts. One is appropriately called GetDeletedFlows and will return a JSON Array of deleted flows, the second RestoreFlows, will restore those deleted flows as determined by an input of FlowName GUIDs.

Azure RunBook to restore deleted flows

I have configured Credentials within my Automation Account so that I can call these from the RunBook. I have also installed the PowerApps admin module which is a requirement of running these PowerShell scripts online.

PowerApps Admin module

Restore Flows via Power Automate

To restore flows via Power Automate, you need to use the premium Create Job and Get Job actions for Azure Automation. The Runbook accepts a default environment, which you can retrieve from the URL of your maker portal, and you need to make sure you select “Wait for Job” under advanced options. Get Job will then retrieve the JSON array of deleted flows, and from this data I have simply converted the FlowName GUIDs into an array. You could of course filter this array by DisplayName at this point if required, and only restore select flows.

With the FlowName GUIDs as an array, we can simply join() them to form a comma-separated list of GUIDs, and this can be passed back to the second Runbook to restore those flows. The second Runbook accepts both the default environment and, of course, the comma-separated list of flows to restore.
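As a sketch, assuming the GUIDs were shaped into an array by a Select action named Select FlowNames (a hypothetical name), the expression would be:

join(body('Select_FlowNames'), ',')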

View Deleted Flows and Restore via Power Apps

Using the same actions above, we can simply take the JSON array output from an equivalent flow, called directly from Power Apps, and convert it to a collection. This can then be displayed within a Power App as a gallery. From the gallery we could, in theory, allow multiple selections and restore multiple flows, but for the purpose of my demo I have a flow that restores a single flow based on the current item being selected. To restore multiple flows, all we need to do is pass a comma-separated list of FlowName GUIDs.

The first Flow triggered from the app will return the JSON array as an output back to the app. As there is no native way to convert a JSON array to a collection, I have used the technique as described in the following post on the Microsoft Forum.

Flow Triggered from PowerApps to get an array of deleted flows

The second flow is triggered by selecting an item on the gallery and the FlowName GUID of the current item is sent to the flow as input in order to pass this to the RunBook.

Flow accepts a GUID in order to restore a deleted flow

The app, in terms of appearance, is rather basic and includes a gallery where the items are based on the collection retrieved from the first Power Automate flow. I trigger this on a manual button press, but it could be the OnVisible of the current screen. The restoration of flows is triggered by selecting the current item; upon completion, I remove the current item from the collection and then refresh the data source. The restored flow should no longer be in the returned data source, as the flow has now been restored. Note that it can take up to 30 seconds for the whole process to complete.

Power App interface to restore deleted flows

The post Restore deleted Flows as an Admin first appeared on DamoBird365.

✇DamoBird365

Split a Workbook into Multiple Worksheets

It is possible to quickly split a workbook into multiple worksheets based on a key column using Office Scripts in Power Automate. For instance, if your Excel sheet contains sales data and the responsible sales manager is in a column, you can automatically detect the unique names, create a sheet for each of them, and populate those sheets with the sales relevant to that manager. Furthermore, if you would rather have unique workbooks for each of those distinct names, I’ve got a solution for that too, and you could use the final script just to bulk load data into Excel efficiently, without using any Add a row actions.

If you are new to Office Scripts, I would recommend you take a look at the Microsoft documentation. Key things to note are that data sent to and from a script in Power Automate is limited to 5MB, and a range is limited to 5 million cells. A user can make 1,600 API calls to Office Scripts per day. If you are familiar with the out-of-the-box Add a row to a table action, you will be aware how long it can take to load data into Excel. Office Scripts can perform bulk data loads, formatting and ordering, all from a single action. I’ve a few Office Scripts articles on my blog too, and even more on my Office Scripts – YouTube playlist!

Sample Excel Workbooks

I headed over to Kaggle and picked up a couple of datasets: the first had 400+ rows for the Top 10 Highest Grossing Films (1975-2018), and another had 100,000+ for Top 48 automakers daily stock prices 2010-2022. The former had a column for the main genre, of which there were 16 distinct values; the latter had 48 (!) car manufacturer codes. Using this data I was able to test the reliability of my proof of concept.

The scripts

I have three scripts for this solution, albeit some of my inspiration and code came from Microsoft’s own sample solution to combine workbooks into a single workbook. Running these from Power Automate is really straightforward using the single Run script action, an example of which can be seen below:

Office Script action - run script
The key column is “Main_Genre”, table name “Table1” with a sheetname of “blockbuters”

The three scripts are as follows:

Split Workbook into Multiple Worksheets Based on Key Column

This has three input parameters, as seen above: the key column name you want to split the workbook on, a table name (either an existing one, or one to create if it doesn’t exist) and a default sheet name. The script has the ability to create a table (commented out), a clever feature of Office Scripts if you are going to receive an Excel file without a table created. It will then retrieve the unique names from the defined column and loop through these, creating a new sheet and filtering the data on the main sheet, before copying the data into the new sheet on each loop. The only limitation I have seen here is that sheet names must be 30 characters or less. This script will run in a matter of seconds and result in an updated workbook with multiple sheets of data based on the unique key column values: 17 sheets for the blockbuster example and 49 for the car manufacturers, remembering that the original worksheet remains untouched.

function main(workbook: ExcelScript.Workbook,
  KeyColumn: string = "ManagerName",  //Specify Key Column Name to Filter On
  MainTable: string = "Table1",  //Either existing OR new table name
  SheetName: string = "Sheet1"  //Default sheet name
) {

  // Get the worksheet by name
  const selectedSheet = workbook.getWorksheet(SheetName);

  // Alternatively, get the first worksheet (uncomment below and comment out above)
  // const selectedSheet = workbook.getFirstWorksheet();

  // Create a table using the data range.
  let newTable = workbook.addTable(selectedSheet.getUsedRange(), true); //***Comment out if new table not required
  newTable.setName(MainTable); //***Comment out if new table not required

  // Define Table Name
  const TableName = workbook.getTable(MainTable);

  // Get all values for the key column
  const keyColumnValues: string[] = TableName.getColumnByName(KeyColumn).getRangeBetweenHeaderAndTotal().getValues().map(value => value[0] as string);

  // Filter out repeated keys. This call to `filter` only returns the first instance of every unique element in the array.
  const uniqueKeys = keyColumnValues.filter((value, index, array) => array.indexOf(value) === index);
  console.log(uniqueKeys);

  // Filter the table to show only rows corresponding to each key and then for each filter
  uniqueKeys.forEach((key: string) => {
    TableName.getColumnByName(KeyColumn).getFilter()
      .applyValuesFilter([key]);

    // Get the visible view when a single filter is active.
    const rangeView = TableName.getRange().getVisibleView();
  
    // Create a new sheet
    let sheet = workbook.addWorksheet(`${key}`);
    
    // Set the range of data from the filter
    let range = sheet.getRangeByIndexes(0, 0, rangeView.getRowCount(), rangeView.getColumnCount());
    
    //Load Data into new Sheet based on selected range
    range.setValues(rangeView.getValues());

  });

//Clear Filter
  TableName.getColumnByName(KeyColumn).getFilter().clear();

}

Split Workbook into Data Arrays based on Key Column

This starts off very similarly to the script that creates multiple sheets, and again has the option to create a table if the sheet does not contain one. The main difference here is that, rather than creating sheets, the filtered data is pushed to the worksheetInformation array and is returned back to Power Automate via the script output. This allows the final script to be called with that output as input, and from this we can create multiple unique workbooks with the unique data in the main default worksheet, Sheet1.

function main(workbook: ExcelScript.Workbook,
  KeyColumn: string = "ManagerName",  //Specify Key Column Name to Filter On
  MainTable: string = "Table1",  //Either existing OR new table name
  SheetName: string = "Sheet1"  //Default sheet name
) {

  /*Commented out if new table not required
  // Get the worksheet by name
  const selectedSheet = workbook.getWorksheet(SheetName);
  
  // Alternatively, get the first worksheet (uncomment below and comment out above)
  // const selectedSheet = workbook.getFirstWorksheet();

  // Create a table using the data range.
  let newTable = workbook.addTable(selectedSheet.getUsedRange(), true); 
  newTable.setName(MainTable); 
  */

  // Create an object to return the data for each workbook.
  let worksheetInformation: WorksheetData[] = [];

  //Define Table Name
  const TableName = workbook.getTable(MainTable);

  //Get all values for key column
  const keyColumnValues: string[] = TableName.getColumnByName(KeyColumn).getRangeBetweenHeaderAndTotal().getValues().map(value => value[0] as string);

  // Filter out repeated keys. This call to `filter` only returns the first instance of every unique element in the array.
  const uniqueKeys = keyColumnValues.filter((value, index, array) => array.indexOf(value) === index);
  console.log(uniqueKeys);

  // Filter the table to show only rows corresponding to each key and then for each filter
  uniqueKeys.forEach((key: string) => {
    TableName.getColumnByName(KeyColumn).getFilter().applyValuesFilter([`${key}`]);

    // Get the visible view when a single filter is active.
    const rangeView = TableName.getRange().getVisibleView();
    // Get values from filter
    let values = rangeView.getValues()

    worksheetInformation.push({
      name: `${key}`,
      data: values as string[][]
    });

  });

  //Clear Filter
  TableName.getColumnByName(KeyColumn).getFilter().clear();

return worksheetInformation

}

// An interface to pass the worksheet name and cell values through a flow.
interface WorksheetData {
  name: string;
  data: string[][];
}

Create Worksheet based on Data Array

Using the output from the previous script, we can supply an array of worksheet information and, with this, determine the number of columns and rows before adding the data to the default sheet of a new Excel file. We then perform some nice-to-have formatting, like autofitting the columns, and create a table so that the data is queryable via the native Excel actions. Albeit, why would you use List rows when you have Office Scripts?

function main(workbook: ExcelScript.Workbook, 
  MainTable: string = "Table1",  //new table name
  worksheetInformation: WorksheetData) {
  
  // Get default worksheet Sheet1
  let sheet = workbook.getWorksheet(`Sheet1`);

  // Create range based on the size of data 
  let range = sheet.getRangeByIndexes(0, 0, worksheetInformation.data.length, worksheetInformation.data[0].length);

  //Populate sheet with data
  range.setValues(worksheetInformation.data)
  
  //Autofit column width
  range.getFormat().autofitColumns();

  //Create New Table
  let newTable = workbook.addTable(range, true);
  newTable.setName(MainTable); 
}

// An interface to pass the worksheet name and cell values through a flow.
interface WorksheetData {
  data: string[][];
}

A sample Flow

The solution to split the workbook into multiple worksheets is all done in one action: call the first script on the file of your choice and set the key column name, table name and sheet name. If you are looking to perform a split into multiple files, you need to combine the second and third scripts in an apply to each. I use a compose with the file content of an empty Excel file for this, so I can then create a new file easily. Split the workbook into an array of data sets and then loop through them all to create new files and populate them with data using the third script. Note that I inserted a delay of 3 seconds between the create file and run script actions. Excel can be a bit touchy when it comes to running scripts: generally scripts will run, but if you try to run them in parallel or access the same file continuously, you might get timeouts and retries.

Bulk load data to Excel Using Office Scripts

As another nice bonus, this final script can be used to bulk load data into Excel in a single action, as long as the data size is not greater than the 5MB limit. Potential data sources could be an API, Dataverse or Microsoft List datasets. Compose an array with an array of headers, followed by an array of data per row, and you can pass it straight into a new Excel file using the technique above and the third script. A sample array from a Microsoft List can be seen below:

an array of data to load into Excel
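For illustration, the array is simply a header row followed by one array per data row; the column names below are hypothetical:

[
  ["Title", "Owner", "Status"],
  ["Item 1", "Damien", "Open"],
  ["Item 2", "Claire", "Closed"]
]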

If you were looking for a method to create such an array structure, don’t go jumping into an apply to each loop; try something a bit more efficient like a compose (for the header), a Select, and a final compose with a union, as follows:
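As a sketch, assuming a Microsoft List with hypothetical Title, Owner and Status columns: a compose named Header containing [["Title","Owner","Status"]], a Select in text mode with a map of createArray(item()?['Title'], item()?['Owner'], item()?['Status']), and a final compose to merge the two:

union(outputs('Header'), body('Select'))

One thing to be aware of with union is that it removes duplicates, so two identical data rows would collapse into one.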

You can then pass the array to a newly created Excel File and this data will be populated in the new file almost immediately! You can see a similar method I have demonstrated here.

Office Scripts are a great method of extending Excel integration with Power Automate. The recording tool is a great way of learning what is possible and building sample code, and the examples from the Microsoft team are also incredibly useful for a newbie. I hope that the above article has provided you with enough information to try this yourself. Please let me know below how this has helped you, and don’t forget to watch my video demo too.

The post Split a Workbook into Multiple Worksheets first appeared on DamoBird365.

✇DamoBird365

Create an Interactive Power Virtual Agent

Making your Power Virtual Agent more interactive is achievable using Power Automate. When the bot asks the end user a question, it can accept input (entities), pass these to Power Automate and then, in return, tailor a response based on the outcome of your flow. In order to demonstrate this, I built a parcel history bot that queries the DHL Tracking API and returns to the user the current status and event history for a valid tracking ID. This method can be re-applied to Microsoft Lists, Dataverse, connectors and any other 3rd party API. You can watch a demo and see how it was built in more detail below.

Setting up the Power Virtual Agent

You can build a Power Virtual Agent rapidly and deploy it via Teams within minutes. It’s possible to build a PVA in Teams under existing Microsoft 365 licensing, but it’s worth noting that if you want your bot to go external, or beyond the standard connectors in Power Automate as I do in my demo, you will need to license your organisation with a PVA subscription.

When you first commission your bot, you have a fully working demo that you could make live and publish immediately. It comes with 4 sample topics to help you understand how the trigger phrases enable your bot to recognise a topic path and how the logic of the authoring canvas plays out.

Parcel Tracking Bot Topic and Trigger Phrases

Above, I established a new topic on parcel tracking and set up 4 trigger phrases around the topic of parcel tracking and DHL. Establishing the process in the authoring canvas needs an understanding of basic logic, but the nodes available to you are quite limited, in that you can:

  1. Ask a question
  2. Add a condition
  3. Call an action (i.e. a call to Power Automate)
  4. Show a Message
  5. Or – Redirect to another topic

The key concepts of the authoring canvas are well described here.

The Process Map

My parcel topic will be triggered using the keywords and, in response, will welcome the user and ask them for a tracking reference. It then saves the supplied reference into a variable, which is supplied to Power Automate as an input variable; in this case I called it trackingref. The flow is created as an action and requires the PVA trigger with the corresponding input variable, the actions in between, and the output. In my case I have two output variables as Markdown tables: the current parcel status and the event history of the parcel.

Once my flow has run, the bot will return the two tables back to the user and, if no parcel is found, will ask the user, via a condition block, if they want to search again. This is done by checking whether the status variable is equal to “No Match Found”, which is logic I built into my flow should no parcel match be found. The PVA is then able to loop back up to the top of the process and ask once more for a valid tracking reference. If a valid parcel is found, or the end user doesn’t want to search again, the conversation is ended with a survey.

Chat bot welcome
Calling a flow as an action and returning the response
a condition to check if the parcel was found
Searching again or ending the conversation

DHL Parcel API

In order for the flow to get the tracking info for a DHL parcel, you will need a developer account, which can be set up in 5 minutes. The documentation for the API I use in my demo is available here and simply requires an API key (which they provide when you register an application) and a parcel reference. You will also need premium licensing for a 3rd party API or connector, but if you are keen to explore what PVA and Power Automate could do for you, you could consider setting up a Microsoft List and building a flow to retrieve data based on a filter on the Get items action.

Power Automate

If you want to try this solution out for yourself, the flow, in the form of a Scope, is available to download here. You can create your chat bot and paste the scope into your PVA Flow. I built my initial Flow in the Maker Portal, using a scope, and that allowed me to test the specifics of retrieving data and returning it to a compose action, without having to run my bot. By building the solution in a scope, I could easily copy it into PVA once I was ready.

My flow begins with the PVA trigger and trackingref input, which allows me to accept the input from the end user conversation. I then have a compose with my API key from DHL, a call to the HTTP action with the tracking reference as part of the URL and a header for the API key. Then I have a condition, which is set up to run on both success and failure (as the HTTP call will fail if no parcel is found), and it checks to see if the shipment array is null.

If the condition evaluates to true, I simply have a compose with a string in it. This allows me to play out the rest of the flow using the full width of the editor; I do not need to put the rest of my flow in the Yes container. In the No branch, I return the default values back to PVA of “no match found!” and a request to check that the reference was valid. The flow then terminates as successful and no further actions are run.

Getting the status object and events array involved exploring the flow run history for the HTTP action; potentially you could return more than one parcel if they share the same reference. If you do go down the route of creating a real DHL chat bot, make sure you fully explore the API docs. You will see below that I have formed the expressions required to get this data and create my first status table using Markdown.

Two ways of achieving the same thing

Next, I really just wanted to demonstrate how you can achieve the same output more efficiently using a Select action. Below I am taking the array of parcel event data and repurposing it as an array of strings.

Sample event array

The apply to each is the more conventional method, where we take the array as input, create a string in a compose, and then bring those strings back together to form an array using outputs() (Pieter’s method). Using Select and concat, I can build exactly the same output but in a single API call, which has a huge efficiency benefit when processing large arrays.

I then use a compose containing a return line to join the array of data and form the table rows, formatted in Markdown.
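As a rough sketch, assuming hypothetical timestamp and description fields on each event, the Select map (in text mode) and the final join might look like the following, where NewLine is a compose containing nothing but a literal return character:

concat('| ', item()?['timestamp'], ' | ', item()?['description'], ' |')

join(body('Select'), outputs('NewLine'))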

The final piece of the flow is the return action, where the output from the flow is returned back to PVA and then presented to the user via the message action. If we have found a parcel, we have two neatly formatted tables; if no parcel is found, we return an appropriate message and ask the user if they want to search again.

I created this solution to demonstrate how to take your Power Virtual Agent to the next level. It’s very simple to have a conversation with a chat bot and set up multiple topics to signpost a user, but the real benefit of PVA is the dynamic data. How many days holiday do I have? Can I book a day off? Who is in the office today? All of these questions could be answered by the chat bot using Power Automate and a data source, via standard connectors to Microsoft 365 or premium with an external API.

What have you used Power Virtual Agents for?

The post Create an Interactive Power Virtual Agent first appeared on DamoBird365.

✇DamoBird365

Create a draft Email in Outlook

Learn how easy it is to create a draft email in your Outlook Drafts folder using Power Automate. Rather than sending the email directly from Power Automate, let me show you a simple way to create a draft email; you can review, edit and send this email directly from your Outlook mailbox. As well as a video to demonstrate how this is possible, I will further extend the concept below and show you how you can include attachments as part of your draft email.

Click here to read up on the Graph API call we use to perform this Power Automate flow.

Using the above documentation and the Graph Send an HTTP Request Action, we can create a draft email in one simple action. The content can include dynamic data including title, to, subject, body and of course attachments.

Graph API Draft Email

I haven’t included any copy/paste samples, as Microsoft do a good job of this in their documentation. However, where it is not so good is in providing an example that includes attachments.

Adding an attachment (from SharePoint)

In order to add attachments to your draft email, you need to construct an array of attachment objects, each of which is made up as follows:

{
  "@odata.type": "#microsoft.graph.fileAttachment",
  "name": "@{items('Apply_to_each')?['{FilenameWithExtension}']}",
  "contentBytes": @{body('Get_file_content')?['$content']}
}

Note that you must escape the @ in the attachment object by doubling it to @@, or you will get an error when trying to save the flow.

In my scenario, I am using the Get files (properties only) action to get a list of files from SharePoint; you could of course filter this, or make your file attachment(s) fixed by using the Get file content action. I use an apply to each to get the content of each file and add it to an object in a compose called Attachment.

Creating the array of attachments for the draft email

To create our object we must include the file name (which can be dynamic content) but also the contentBytes; this is built using the expression body('Get_file_content')?['$content'] and must not include double quotes. Once we have our objects in the Attachment compose action, we can bring them all together in an array using the expression outputs('Attachment'). This is a neat little trick to create an array based on the data contained within a compose action in an apply to each.

{
    "subject": "Files?",
    "importance": "Low",
    "body": {
        "contentType": "HTML",
        "content": "Quite a few files are attached!"
    },
    "toRecipients": [
        {
            "emailAddress": {
                "address": "damien@yourtenant.onmicrosoft.com"
            }
        }
    ],
    "Attachments": @{outputs('Attachment')}
}

The above HTTP body demonstrates how you might create a draft email with attachments.

Adding an Attachment (from OneDrive)

Sending file(s) from OneDrive is very similar. You will need to construct the expression for the file content; the expression I have used for my Excel file is outputs('Get_file_content')?['body']?['$content'].

Above, I am attaching a single file. Don’t forget about escaping the @ symbol. As this is a single file, we need to form an array ourselves by adding opening [ and closing ] square brackets (see below).
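For illustration, a single-file Attachments array might look like the following; the file name is a placeholder:

"Attachments": [
  {
    "@@odata.type": "#microsoft.graph.fileAttachment",
    "name": "MyWorkbook.xlsx",
    "contentBytes": @{outputs('Get_file_content')?['body']?['$content']}
  }
]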

Adding the attachment array to the body of the email is exactly the same: you can add it by calling the expression outputs('Attachment') or by using the dynamic content block.

The Drafts

All you have to do now is have a look in your Drafts folder. Open up the draft, make any changes and hit send!

Looking to go a bit more complex?

If you are looking to create a draft email containing HTML, a link to a website, a mailto link or maybe a signature, take a look at the example flow below, which can be downloaded from my GitHub here!

using compose to create HTML components
A signature block and main body in HTML
Create a draft email using Graph API in Power Automate

And here is what your draft email created in Power Automate will look like:

Sample draft email generated using Graph API

Please let me know how you have used this in your own solutions.

The post Create a draft Email in Outlook first appeared on DamoBird365.
