Generate Any PDF Documents from HTML with Flow

This is a crazy one, and if you have read ALL my Microsoft Flow blog posts, you'll be familiar with all these pieces.

We are going to connect them in a different way though, and the result is still awesome.  Let's begin.

In this blog post:

  • Try to convert image files to PDF
  • Use a workaround to convert image files to PDF
  • Ultimate power: create Any HTML and convert to PDF. 
  • Effectively, we arrive at PDF-gen with templating.

Try to convert Image Files to PDF with Flow

  • Get File Content (Binary) from SharePoint. 
  • Write to OneDrive for Business
  • Use Convert to PDF to convert JPG to PDF.

This doesn't actually work.  But it's good to try it and see the error it throws: "Not Acceptable".

This isn't the end though, we have workarounds.

Use a workaround to convert image files to PDF

So even though Convert File doesn't work on image files directly, it does work on HTML.  And this is the heart of our workaround.

Remember several blog posts ago we used dataUriToBinary() to convert PowerApps camera image to a Binary file to store into SharePoint?

Today, we are going backwards.  We are taking a SharePoint image (as Binary), and converting that to dataUri format.  Let's put that in a variable, dataUriJPG.
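A minimal sketch of that expression (assuming the SharePoint action is literally named 'Get file content' and the image is a JPG):

concat('data:image/jpeg;base64,', base64(body('Get_file_content')))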

We then create another string: an <img> tag with the dataUri variable as its src.  You'll need the concat() expression to combine the strings and the variable.

concat('<img src="', variables('dataUriJPG'), '" />')

See, browsers can render images with the dataUri directly embedded.  Let's write that to an HTML file (pic.html).

  1. Get File Content from SharePoint
  2. Convert to dataUri string
  3. Concat within an HTML IMG tag
  4. Write to a HTML file
  5. Convert File from HTML to PDF

And hit it again with convert-file to PDF

Ta-da!

jpg-to-pdf-around-result.png

 

End with the ultimate power: create Any HTML and convert to PDF. 

  1. Try multiple headings
  2. And repeat that image twice:

concat('<html>', '<h1>heading 1</h1>', variables('html'), '<h1>heading 2</h1>', variables('html'), '</html>')

Read and include some live data.

  1. Use SharePoint List Folder action and get a list of all the files
  2. Use Data Operations - Select to remap just the Path and Size properties (this is the same as Array.Map)
  3. Create HTML Table - with automatic columns.  We only have two fields.
  4. Concat the table into our HTML (sketched below)
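A rough sketch of steps 2 and 4 (the action names 'List_folder' and 'Create_HTML_table' here are assumptions):

Select
  From:  body('List_folder')
  Map:   { "Path": item()?['Path'], "Size": item()?['Size'] }

concat('<html>', variables('html'), body('Create_HTML_table'), '</html>')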


Effectively, PDF-gen with templating

In this blog post, we covered some new and old techniques:

  • DataUri is our friend again
  • Convert-File to PDF
  • Select, Create HTML table

I leave the last example as a thought exercise for you, the reader.

  1. Store the HTML template in an HTML file.  You really don't want to be typing HTML in concat within a tiny formula box.
  2. Use the replace() expression to swap REPLACE_ME tokens with values you plan to fill from a live data source (sketched below).
  3. Insert images as DataUri strings to easily get your logo, headers into the PDF report.
  4. Consider using PDF to snapshot a list item upon workflow completion, then email that as a PDF attachment to the manager.
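For example, step 2 could look something like this (the REPLACE_ME_TITLE token, the template variable and the loop name are all hypothetical):

replace(variables('template'), 'REPLACE_ME_TITLE', items('Apply_to_each')?['Title'])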

Thank you for reading

I have a favour to ask.  See, I gave @paulculmsee a sneak peek of this post and he's like, "oh, that's it?"
If you think "gosh, this one's awesome", please tell him he's wrong :-)

 

Two ways to convert SharePoint files to PDF via Flow

This blog post is divided into three sections: The easy, The Auth and The Complete parts.

Microsoft Flow released a new power to Convert Files to PDF.  This made my October.  So of course we have to play with this.

Part 1. The Easy

Now this works well, but it raises a few questions: 

  1. Why do I have to copy to OneDrive for Business?
    Because the Convert File action is also available for OneDrive (consumer), but not for SharePoint.
     
  2. Can I do this without copying to OneDrive for Business?
    Not with the default actions for now.  There's no Convert File action for the SharePoint connector, and the SharePoint connector's Get File Content action doesn't allow a format parameter.
convert-file-actions.png

And this is the simplest solution.

Warning: Next be dragons (Auth and API)

We are going to dive in to see what API this uses.  And whether we can call the same API on SharePoint library document directly without copying the file to OneDrive first.

This next part is good for you.  But it is heavy and will look complicated.  Brace yourselves.

...So what API does this use?

https://docs.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_get_content_format

GET /drive/items/{item-id}/content?format={format}
GET /drive/root:/{path and filename}:/content?format={format}

Specifically, this uses the Microsoft Graph

Part 2. The Auth

Disclaimer - OAuth looks familiar, but the steps are always tricky and easy to mess up.  So if you are following along, walk carefully.

For the next part, we need to connect to MS Graph with AppOnly permissions

In Azure Portal - under Azure AD - create an App Registration (I'm reusing a powershell-group-app one I had previously baked).

client-id.png

We will be accessing files - so make sure an Application Permission to read files (e.g. Files.Read.All) is granted.  This requires admin consent.

client-perms.png

Via the Azure AD portal - hit Grant Permissions to perform admin consent directly.

client-grant.png

Now we are going to write the Flow with HTTP requests

Hit the token endpoint for our tenant with a POST message.  The body must have grant_type=client_credentials, along with the client_id and client_secret, and the resource is https://graph.microsoft.com.
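As a sketch (placeholders shown; this is the v1 token endpoint, which takes a resource parameter):

POST https://login.microsoftonline.com/{tenant-id}/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id={client-id}&client_secret={client-secret}&resource=https://graph.microsoft.com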

This request, if successful, gives us back JSON.  Parse JSON with this schema:

{
    "type": "object",
    "properties": {
        "token_type": {
            "type": "string"
        },
        "expires_in": {
            "type": "string"
        },
        "ext_expires_in": {
            "type": "string"
        },
        "expires_on": {
            "type": "string"
        },
        "not_before": {
            "type": "string"
        },
        "resource": {
            "type": "string"
        },
        "access_token": {
            "type": "string"
        }
    }
}

This gives Flow an access_token that the remaining steps can use to call Microsoft Graph.

Test this by calling the MS Graph endpoint for the SharePoint site.
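For example, the root site makes a handy smoke test:

GET https://graph.microsoft.com/v1.0/sites/root
Authorization: Bearer {access_token}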

token-test.png

This HTTP request with the Bearer access_token successfully returns SharePoint site data from Microsoft Graph.

 

Part 3.  The Complete Solution to fetch SharePoint document as PDF

Call /content?format=PDF
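Roughly this shape of request, assuming we address the document by site and path (the site id and file path below are placeholders, not real values):

GET https://graph.microsoft.com/v1.0/sites/{site-id}/drive/root:/{folder/file.docx}:/content?format=pdf
Authorization: Bearer {access_token}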

get-content-format-redirect.png

A few things are going on in this result:

  1. Flow thinks this request has failed - because it doesn't return a 2xx status.  It returns a 302 redirect.
  2. The Response header contains the Location of the redirect, which is where the PDF file is

Parse JSON again, this time on the response headers.

{
    "type": "object",
    "properties": {
        "Transfer-Encoding": {
            "type": "string"
        },
        "request-id": {
            "type": "string"
        },
        "client-request-id": {
            "type": "string"
        },
        "x-ms-ags-diagnostic": {
            "type": "string"
        },
        "Duration": {
            "type": "string"
        },
        "Cache-Control": {
            "type": "string"
        },
        "Date": {
            "type": "string"
        },
        "Location": {
            "type": "string"
        },
        "Content-Type": {
            "type": "string"
        },
        "Content-Length": {
            "type": "string"
        }
    }
}

We just want Location.  We also need to configure the next action to continue on the previous HTTP action's error (via the "Configure run after" setting), since the 302 counts as a failure.

redirect-continue.png

And finally, retrieve the file via GET again
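That last step is just a plain GET against the Location value we parsed out (the Parse JSON action name here is an assumption; the Location value is a short-lived, pre-authenticated download URL, so it should not need the Bearer token):

Method: GET
URI:    body('Parse_JSON_Headers')?['Location']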

fetch-return.png

 

When run, the flow looks like this:

run.png

 

Summary

The complete solution uses HTTP to call MS Graph directly and pulls back the PDF file after a 302 Response.  This is a fairly complex example so please evaluate whether you want the Correct Way or the Easy Way.

Note also that Microsoft Flow has a Premium connector for Azure AD Requests - which will negate the middle part of this blog post re: Auth and let you dive right into MS Graph REST endpoints without worrying about access_tokens.  

Call this Flow request and it downloads the PDF file, converted from a DOCX document in SharePoint team site.

 

Review Special Techniques Invoked:

  • MS Graph Auth
  • The Continue on Error configuration
  • Parse JSON on Response Header

 

Angular 4, SharePoint On-Premises, localhost development and SP-REST-Proxy

We've been running Angular 4 (via ng-cli) on our SharePoint On-Premises environment for a while, so I wanted to take a short moment and jot down our battle notes.

Especially as a thank you to @AndrewKoltyakov, who built https://github.com/koltyakov/sp-rest-proxy

If there are areas that are unclear, let me know in the comments and I'll try to clarify.

package.json

  "scripts": {
    "ng": "ng",
    "hmr": "ng serve --hmr --environment=hmr --verbose --proxy=proxy.conf.json",
    "debug": "ng serve --environment=hmr --verbose --proxy=proxy.conf.json",
    "prod": "ng serve --env=prod --verbose --proxy=proxy.conf.json",
    "serve": "ng serve -e=source --verbose --proxy=proxy.conf.json",
    "build": "ng build --env=prod --aot ",
    "bundle-report": "webpack-bundle-analyzer dist/stats.json"
  },

We added additional scripts to our package.json.  This just means we can easily switch to different modes without forgetting which arguments we messed up.

"serve" was the basic one that says run localhost.  We don't use this one now as much, as we love hot module reloading (hmr).

We use "--proxy" to set up Angular/Webpack-Dev-Server's proxy settings via a separate proxy.conf.json file.

https://github.com/angular/angular-cli/blob/master/docs/documentation/stories/proxy.md

"debug" was the same as "prod" except it doesn't have Angular's production flag.  This makes things faster somehow.  In reality, if you want Angular to be blinding fast, use --aot

"hmr" is nearly the same as "serve, and runs out of localhost".  Use the Angular --proxy 

"build" compiles everything with --prod and --aot.  Does not run locally.

We use different Angular environment settings to set up mock proxies, and apply a slightly different header CSS so we know at a glance which environment we are in.

I'm going to hear the question: Why so many different variations?!  That's so confusing.

Well, they are all different.  And nobody can decide which one is the best for which scenario.  So we keep writing new ones!  

Deal with it :-)

environment.ts

A quick note on our Angular environment, before we get into the proxy configurations.

// environment.hmr.ts
export const environment = {
  production: false,
  source: true,
  hmr: true,
  mock: require("../app/core/testData.json"),
  jquery: require("jquery")  
};

Depending on which --env flag is used (e.g. --env=hmr), the corresponding environments/environment.hmr.ts file is loaded.

We put variables that affect different code execution in this environment file.  We also find this to be a good place to load big mock json files.

Sometimes you want to run the application locally when you aren't in the office, so even the proxy won't work - we then fall back to local mock JSON data sources.
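As a rough illustration of that fallback (the service, the list name, the mock shape and the flag semantics below are all made up; our real services are more involved):

// tasks.service.ts - hypothetical sketch only
import { Injectable } from "@angular/core";
import { environment } from "../environments/environment";

@Injectable()
export class TasksService {
  getTasks(): Promise<any[]> {
    // assumption: environment.source === true means "hit the real (proxied) SharePoint REST API"
    if (!environment.source) {
      // no proxy available (e.g. out of the office): resolve from the bundled mock JSON
      return Promise.resolve(environment.mock.tasks || []);
    }
    // otherwise the call goes through webpack-dev-server --proxy and on to SP-REST-Proxy
    return fetch("/_api/web/lists/getbytitle('Tasks')/items", {
      headers: { Accept: "application/json;odata=verbose" }
    })
      .then(res => res.json())
      .then(json => json.d.results);
  }
}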

hot module reloading (HMR)

HMR needs a separate blog post to describe it.  We followed this:

https://medium.com/@beeman/tutorial-enable-hrm-in-angular-cli-apps-1b0d13b80130#.2p0n6oo34

// main.ts
// (import paths assume the standard ng-cli layout and the hmrBootstrap helper from the tutorial above)
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';
import { environment } from './environments/environment';
import { hmrBootstrap } from './hmr';

const bootstrap = () => {
  return platformBrowserDynamic().bootstrapModule(AppModule);
};

if (environment.hmr) {
  if (module['hot']) {
    hmrBootstrap(module, bootstrap);
  } 
  else {
    console.error('HMR is not enabled for webpack-dev-server!');
    console.log('Are you using the --hmr flag for ng serve?');
  }
} 
else {
  bootstrap();
}

When HMR is enabled via environment variable, we have a slightly different bootstrap mechanism.

proxy.conf.json

{
    "*/_api/**": {
        "target": "http://localhost:8080",
        "secure": false,
        "changeOrigin": true
    },
    "/style%20library/**": {
        "target": "http://localhost:8080",
        "secure": false,
        "changeOrigin": true
    },
    "/Style%20Library/**": {
        "target": "http://localhost:8080",
        "secure": false,
        "changeOrigin": true
    },
    "*/sp2016/**": {
        "target": "http://localhost:8080",
        "secure": false,
        "changeOrigin": true
    }
}

We re-route several localhost calls in Webpack-Dev-Server to the SP-REST-Proxy  
We also send relative URL asset requests through SP-REST-Proxy.

SP-REST-Proxy

// serve.js
const RestProxy = require('sp-rest-proxy');
const path = require('path');
const settings = {
    configPath: path.join(__dirname, '/_private.conf.json'), // Location for SharePoint instance mapping and credentials
    port: 8080,                                              // Local server port
    staticRoot: path.join(__dirname, '/static')              // Root folder for static content
};
 
const restProxy = new RestProxy(settings);
restProxy.serve();

This is the main starting server script.

{
    "siteUrl": "http://sp2016",
    "domain": "SPG",
    "username": "jliu",
    "password": "********" 
}

This is _private.conf.json, everything is routed to SharePoint On-Premises as me.

Images and CSS we place in a static subfolder which mirrors the SharePoint root web style library.

\static
    \style library
        \cloud.jpg
        \main.css
\server.js
\_private.conf.json

Start SP-REST-Proxy and it will bind all localhost:8080 calls over to SharePoint, or to its static file system.

localhost development

And that's pretty much how we set up Angular 4, WebPack-Dev-Server (with --proxy), SP-REST-Proxy, various different environment variables and wire everything to different npm run scripts.

Our main favourites:

npm run hmr
This option runs localhost, with SP-REST-PROXY pointing to the real on-premises server.

npm run serve
This option runs localhost with mock data.  It also uses SP-REST-PROXY for static resources, but the Angular data services don't make real calls - just mock ones.

npm run build
This option builds with --prod and --aot.
We chain this with SP-SAVE to upload the build into our on-premises development environment.

npm run bundle-report
This runs a bundle report check and is fun eye candy to help us understand what the hell got webpacked.
 

Gaps between PowerBI streaming tiles and SharePoint

So I spent an evening playing (I actually have a lot of fun exploring these things) and figuring out how the pieces of SharePoint, PowerBI and Flow are supposed to work together.

In my head, they already connect.  But I have never seen anyone blog about it.  So I decided to give it a stab.

Turns out there are some gaps.

The Idea

The idea is simple.  We can create a PowerBI report that uses a SharePoint List as a datasource.  But instead of configuring scheduled refresh, we want to use a PowerBI REST dataset to push data in a streaming way.  And since Microsoft Flow has an action to do this, as well as triggers that listen to the SharePoint List, we can get SP-List-push-to-PowerBI without needing scheduled refresh.  That is a crazy fun idea.

The Reality

The reality is that there are several gaps.  These are probably solvable, but I just want to list them first, and we'll tackle them in the future.

Gap 1.  SharePoint List dataset != Push-enabled REST dataset

PowerBI makes a distinction between what's a REST/pushable dataset vs normal datasets like external lists.  In fact, Flow cannot connect to a non-REST dataset.

So we need to create a REST dataset in PowerBI Service (it is not a feature of PowerBI Desktop), and then use the REST dataset as a live connection in a PowerBI Report.

Gap 2.  The only way to create a PowerBI REST dataset is via the REST API. 

There is no UI.  Ouch.  That pretty much makes this a developer task.  OK that's fine, we create a REST dataset via REST endpoint and a JSON schema (double ouch).  
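To give a sense of the "double ouch": creating a push dataset is a POST to the PowerBI REST API with a JSON schema, roughly like this (the dataset, table and column names below are invented for illustration):

POST https://api.powerbi.com/v1.0/myorg/datasets
Authorization: Bearer {access-token}
Content-Type: application/json

{
  "name": "SharePointListStream",
  "defaultMode": "Push",
  "tables": [
    {
      "name": "ListItems",
      "columns": [
        { "name": "Title", "dataType": "string" },
        { "name": "Modified", "dataType": "DateTime" },
        { "name": "Amount", "dataType": "Double" }
      ]
    }
  ]
}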

Now we can build our PowerBI report, connect the REST dataset from PowerBI Service.  We save and publish this report to PowerBI Service and then insert the PowerBI Report in an SPFx webpart (PRO license needed for embed) into a SharePoint modern page.

This part is actually really seamless.  Don't worry, we have more gaps.

Gap 3. PowerBI report does not livestream REST dataset results.  

So I'm staring at my PowerBI visual in a SharePoint modern page.  In a separate window, I update the source SharePoint List.  In yet another separate window, I can see the Flow run and push the new list item into the streaming dataset.

Excellent.  Except, the SPFx PowerBI Report Visual isn't updating.  It doesn't update.  I waited 15mins for it to do nothing!

If I F5, then I immediately see the new value.  But it doesn't do live streaming refresh :-(

It turns out, to see live stream results we need a PowerBI Dashboard or PowerBI Tile (streaming tile).

PowerBI Dashboard can only be created in the PowerBI Service.  We take an existing report and pin the visual.  This asks us to add the visual as a tile in a PowerBI Dashboard.

Gap 4. SPFx PowerBI report webpart does not show PowerBI dashboard embed.

So I create a PowerBI Dashboard and I go back to the SPFx PowerBI Preview webpart.  Only to find it doesn't do dashboard embed.

It only does Report embed.  So we will need to build our own SPFx webpart that lets us do dashboard embed.  This requires an embed token from MSGraph - but we should be able to piggyback the graph-util helper in SPFx to do our token exchange.

There's potentially one more issue.

Gap 5.  Does embed PowerBI Dashboard or Tile actually connect to streaming datasets?

I don't know the answer to this yet.

Gap 6. PowerBI REST Dataset endpoint can only add rows, not update them

The REST API lets us add rows to a REST dataset easily, or clear the table.  But there's no way to update an existing row.

The use case for the streaming REST dataset is like an ongoing stock ticker or temperature meter.  You don't update a record that's streamed past.  You only care about new records.

Flow only has an action to add row to PowerBI Dataset.

Gap 7. Flow does not have an action to Clear the dataset

The REST API lets us clear the dataset table, so technically, I could clear the table each time and repopulate it with the entire list again.

But unfortunately, Flow only has one PowerBI Action - add row to a REST dataset.  It does not have a clear rows action.
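For reference, the two REST calls involved look roughly like this - the Flow action wraps the first one, and there is simply no action for the second (ids, table names and the row payload are placeholders):

POST https://api.powerbi.com/v1.0/myorg/datasets/{dataset-id}/tables/{table-name}/rows
{ "rows": [ { "Title": "New item", "Amount": 42 } ] }

DELETE https://api.powerbi.com/v1.0/myorg/datasets/{dataset-id}/tables/{table-name}/rows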

More work to do, more exploration to be had

Parts of the puzzle works really well.  Flow pushes data freely from SharePoint list changes into the streaming dataset.  If the dashboard or tile is shown on a webpage by itself, we immediately see it update like magic.

But if we want the dashboard/tile embedded within a SharePoint modern page, there's still work to be done.

Ultimately, if we want live streaming data capability, it might be easier to use PowerBI DirectQuery against Azure SQL, and have Flow push data into that, instead of the PowerBI REST streaming dataset.

 

 

 

Auto-Classify Images in SharePoint Online library via Flow for Free

Microsoft Flow's most recent update added the ability to query and update SharePoint file properties.  This is actually really timely, as I wanted to combine it with a few other techniques and build a Document Library Image Auto-Classifier Flow.

Is that a clickbait headline?  Well it's totally real, and we'll build it in a moment.

result-1.png

Steps:

  1. Set up your cognitive service account (understand the free bucket)
  2. Set up a SharePoint Online document library with Categories
  3. Set up the Flow file loop
  4. Do a fancy JSON array to concatenated string projection operation with Select and Join
  5. Voila, no code.  And pretty much *free*.

This is part of a series on Microsoft Flow

Set up your Azure Cognitive Service instance

Follow these simple steps to create a Computer Vision API Cognitive Service in your Azure subscription.  Computer Vision API has a free tier.

1. Create Computer Vision API

2. Scroll down and hit Create

3. Give this service a name, set up the region and select the Free pricing tier

4. You need the endpoint url here

5. Also, copy the Name and key 1

You will need the "Name" and a "Key" for the next step.

The free tier of Computer Vision API gives you the first 5,000 transactions free per month.

Note the service isn't available in all regions.  Most of my stuff is in Australia East, but for the Cognitive Service API it has to be hosted in Southeast Asia.  YMMV.

Then we need to set up the connection in Flow

1. Find the Computer Vision API action

2. Enter service name, key and the root site url to set up the initial connection

3. Created correctly, you get an action like this

 

Set up the SharePoint Document Library

My SharePoint document library is very simple - it is just a basic document library, but I added an extra site column "Categories". This is an out of the box field, and is just a simple text field.

This is a simple step.

Set up the Flow

I trigger the flow with a Scheduled Recurrence that runs once per day.
Using the new Get Files (properties only), I grab a list of all the files in a document library.
I then run for-each on the list of files.

Inside the for-each, I have a condition that checks if the Categories field is null.  If you type null directly into the field, you will get the string 'null'. 

Tip: To actually get the formula/expression null, select Expressions and type null there.

If the Categories is null, then we proceed.

Grab the file content via Get file content
Call the Computer Vision API with the image content.  Set the Image Source to binary, instead of URL.

Tip: I use a compose to see the debug results

I'll explain the array projection in the next section.

Select projection: JSON array to String array

We have an array of JSON objects:

[{
     'name': 'foo'
},
{
    'name': 'bar'
}]

flow-project-1.png

This default UI maps to:

tags -> [{ specified properties }…]

The result is that we would end up with a new array of (simpler) JSON objects.
Hit advanced text mode.

flow-project-2.png

Here, we can use Expression to say item('Tag_Image')?.name

flow-project-3.png

In this case the UI is smart enough to show Tag.Name as a dynamic content (as well as the Tag.ConfidenceScore property).  So we can select that.

This performs a projection of

tags -> [ names… ]

We now have an array of strings.  Combine them via Join with a comma (,) separator.
Update the file properties with this string.
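The combine step is a one-liner with the join() expression (assuming the Select action is literally named 'Select'):

join(body('Select'), ',')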

flow-project-4.png

Let's see the results

I uploaded a few images to the library.
Note the categories field is blank.

result-2.png

Running the Flow

When it finishes, I'm checking the JSON - the picture is identified as a "person" with 99% confidence.
The combined string "person,young,posing" is updated into the File property.

The documents are updated.  When Flow runs tomorrow it will skip them.

 

The Final Flow