Technical Topics (english) Archive | THE SELF-SERVICE-BI BLOG

Change axis and measure in a visual by clicking a slicer
https://ssbi-blog.de/blog/technical-topics-english/change-axis-and-measure-in-a-visual-by-clicking-a-slicer/
Mon, 16 May 2022

Microsoft has released the "field parameters" feature in the latest Power BI Desktop. It allows the user to change the measures OR dimensions of a visual by clicking on a slicer, without further modeling. I introduce this feature in this post and show how you don't have to choose measures OR dimensions, but can have BOTH at once 🙂

What are Field Parameters?

As of today (May 16th, 2022), Field Parameters are a preview feature in Power BI Desktop. They allow users to change the measure or the dimensional attributes within a report by clicking on a slicer. Because the feature is still in preview, make sure you enable it under Options –> Preview features –> Field Parameters.

What are they helpful with?

By clicking on a slicer you can determine which axis label your line chart has, which row or column header your matrix visual has, or which measure is displayed in your visualization. This way you can use the limited space on your report more than once and – if used cleverly – integrate more analysis options into your reports.

Field Parameters – Solution video

You can download the file here

Cheers from Germany,

Lars

Loading multivalued fields from Microsoft Access – not possible with Power Query
https://ssbi-blog.de/blog/technical-topics-english/loading-multivalued-fields-from-microsoft-access-not-possible-with-power-query/
Mon, 13 Dec 2021

Unlike most of my articles, this one doesn’t solve a problem, it highlights one. If you use Microsoft Access as a data source to import data via Power Query, this article should make you aware of a limitation when importing so-called ‚multivalued fields‘.

What are multivalued fields in MS Access?

'Multivalued fields' are a special feature of MS Access (SQL Server does not have this feature). They allow you to store not just one value but up to 100 values in a single field of a single record. Click here for examples and to learn how to create them.

What if you want to import those fields with Power Query?

I don't want to drag this post out unnecessarily. As you can see from the title, there is no way to import 'multivalued fields' from MS Access using Power Query. These fields are simply ignored during import.

Trying to import 'multivalued fields' from MS Access via Power Query fails
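For illustration, here is a minimal sketch of what such an import looks like in M – the file path and table name are made-up examples, and the multivalued column will simply be missing from the result:

let
    // Connect to the Access database (path is a placeholder)
    Source = Access.Database( File.Contents( "C:\Data\MyDatabase.accdb" ) ),
    // Navigate to the table that contains the multivalued field
    Projects = Source{ [ Schema = "", Item = "tbl_Projects" ] }[Data]
in
    // Any multivalued column of tbl_Projects is silently dropped here
    Projects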

Because I didn't know whether I was simply missing something here – e.g. an unknown parameter of the Access.Database() function – I contacted Curt Hagenlocher of the Power Query team. He confirmed that there is 'no way to import values of this type' using Power Query. Thank you, Curt, for this statement.

Conclusion

Since I still see MS Access used relatively often in the self-service area as a tool for data entry and processing, my advice at this point is: if you develop the Access database yourself (i.e. you can design it according to your own ideas) and can already foresee that it will be accessed via Power Query, avoid 'multivalued fields'. If you want to know how to replace 'multivalued fields' with adequate modeling, this article will show you how.

Since we are already in the middle of December: Have a great Christmastime and a happy new year,
Lars

How to speed up metadata translations in Power BI
https://ssbi-blog.de/blog/technical-topics-english/how-to-speed-up-metadata-translations-in-power-bi/
Sun, 04 Jul 2021

At the time of writing, the Power BI service supports 44 different languages. In this article I show you how to significantly reduce the effort of translating your tables, columns and measures, while doing the actual translation in tabular form in an Excel file.

Expectation management

To clear up any misunderstandings right at the beginning: no, my tool does not translate your data model by itself. You have to translate the names of tables, columns and measures yourself. But in this post I provide a practical solution that reduces your manual effort in Tabular Editor to a minimum once you have translated the objects. You don't have to create the individual languages (aka "cultures") manually and then right-click and rename each individual table, column and measure. My tool does that for you. There is already a well-documented solution to this problem by Kasper de Jonge, but it boils down to editing JSON files. If you – like me – prefer editing in tabular form in an Excel file, then this post should help you. In addition, there is an excellent article from the Tabular Editor team, from which I learned a lot about how the process works manually.

Requirements

In my attempt to find a solution for Excel-based translations, I asked Daniel Otykier – the creator of Tabular Editor – how to programmatically translate the objects of the data model via C#. Daniel was kind enough to point me to the macro recorder in Tabular Editor 3 to see how to create languages and translate the individual objects via C# script. TE3 is an amazing tool – if you can afford it, buy it! Anyway: because TE2 is free (download it here), I'll show you how to do it with TE2. To follow my solution path, two requirements must be met:

  • You need a version of Tabular Editor (2 or 3) and
  • Your data model must be in a Premium workspace (Premium per capacity or Premium per user), otherwise your translations in Power BI Desktop won't reach the service.

Before we look at how the whole thing works, a few words about the available languages.

Available languages – or: How to translate into Hebrew and Arabic

When you save a Power BI Desktop file for the first time, the file remembers the initial model language (or default language), which you can determine as follows:

Defining the default language of the model in Power BI Desktop

This list contains 42 different languages.

If you go into the Power BI service, you can change the language of the UI there as follows:

Switching languages in the Power BI service

This list contains 44 languages, 2 more than the list in Power BI Desktop. What is the difference? The Power BI service additionally offers Arabic and Hebrew. The question of why these two languages are missing in Power BI Desktop can be answered quickly by selecting one of them in the Power BI service (here using Hebrew as an example):

Power BI service with language Hebrew

Here, the whole UI is suddenly laid out right-to-left. In Arabic it is the same. While this is implementable in a web application like the Power BI service, I guess it would have meant significantly more effort for Power BI Desktop as a desktop application. But that is just a guess.

What my tiny Excel tool does

At the end of this article you will find a video in which I show you how to use the Excel file. Nevertheless, I summarize here briefly the steps that can be implemented with the Excel file.

1. Load tables, columns and measures from the model into the Excel file

In order to translate the individual objects within Excel, I first have to get them – in the initial model language – into my Excel file. So that I don't have to do this manually, I have integrated a Power Query solution that, based on the current process ID of the open Power BI Desktop file, reads them into the sheets 'Translation – Tables', 'Translation – Columns' and 'Translation – Measures'.

2. Get the right culture code for your language.

Microsoft has detailed documentation about which languages are supported in Power BI. Unfortunately, I only noticed this after I had started a discussion on Twitter to help me identify the languages (thanks to everyone who participated 😉 ). What is unfortunately missing in this documentation is the corresponding 'culture code' of each language, which I need when I want to create a translation in the data model. An example: for the language 'Chinese (Simplified)' there are – as far as I know – 4 different culture codes: "zh-Hans", "zh", "zh-CN", "zh-SG". "zh" is recognized by the Power BI service, "zh-HK" for example is not. Therefore, I searched for a working culture code for each of these 44 languages and checked it against the Power BI service.

Choosing a language and getting a working Culture Code in return

You simply select the desired language via the dropdown (the dropdown offers each language in English and in the original language, which can be very helpful) and the culture code then results by itself. My tool is made for up to 5 translations, but you can extend that if you wish.

3. Translate tables, columns and measures in an Excel sheet

After loading the objects of the data model into the Excel file and defining the up to 5 languages you want to translate into, you can perform the translation in tabular form in the sheets 'Translation – Tables', 'Translation – Columns' and 'Translation – Measures'.

Doing the actual translation of tables, columns and measures in Excel

4. Get the C# code you need for Tabular Editor

Now that the translations are in Excel, I still need to get them into the data model. For this, my tool generates C# code in 2 places, which I can simply paste into the Advanced Scripting window of Tabular Editor (see the sketch after this list).

  • Place #1: Sheet 'Create Cultures' → in column E you will find the code to create the cultures in Tabular Editor.
Create Cultures quickly by using C# code in Tabular Editor's Advanced Scripting window
  • Place #2: Sheet 'ALL TRANSLATIONS' → This table is based on a Power Query query that combines the translations from the sheets 'Translation – Tables', 'Translation – Columns' and 'Translation – Measures'. Refresh this query and paste the content of the first column into the Advanced Scripting window of Tabular Editor, and your translations are done 🙂
Translate the objects of your data model quickly by using C# code in Tabular Editor's Advanced Scripting window
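For orientation, the generated code consists of plain Tabular Editor C# script statements. A hand-written sketch of the two kinds of statements could look like this (culture, table, column and measure names are made-up examples):

// Place #1: create a culture (one statement per language)
Model.AddTranslation("de-DE");

// Place #2: translate individual objects (one statement per object and language)
Model.Tables["Sales"].TranslatedNames["de-DE"] = "Umsatz";
Model.Tables["Sales"].Columns["Product"].TranslatedNames["de-DE"] = "Produkt";
Model.Tables["Sales"].Measures["Total Sales"].TranslatedNames["de-DE"] = "Gesamtumsatz";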

Download

You can download my Excel file here.

Known Limitations

If your data model changes – e.g. you add measures and now also want to translate them – then reloading the tables, columns and measures can cause the translations you have already made to no longer sit in the correct row.

The new measure 'Summe Umsatz YTD' moves the old 'Summe Umsatz' one row down, so that the translation in that row no longer matches.

Video

Cheers from Germany,

Lars

Let your users know how up-to-date their data is
https://ssbi-blog.de/blog/technical-topics-english/let-your-users-know-how-up-to-date-their-data-is/
Fri, 27 Nov 2020

Before an analyst can start analyzing the data, it is first necessary to know whether the data is up-to-date or not. There are (at least) two relevant pieces of information: 1) when was the data model last updated, and 2) is the data in the data source up to date. With this post I want to show how I like to let my users know how up-to-date their data is.

How I like to present this information

I think the information about when the data model was last updated is probably the most important. I like to place it directly on the report so that it is immediately visible and, in case the report is printed, it is also included on the printout (…yes, many customers want to be able to print their reports 😐 ). The actuality of the data in the fact tables, on the other hand, is needed only secondarily and takes up more space in a data model with several fact tables, so I provide that information via a visual header tooltip. The tooltip can be called up when the information is needed and thus does not constantly take up space on the report. Here I provide three pieces of information:

  1. the name of the table,
  2. the date column on the basis of which I determine the max date, and
  3. the actual maximum date.

The animated GIF below shows how I provide this information:

Latest model refresh & max dates of the fact tables at a glance

If you are now wondering how this information gets into the report, read on.

Latest data model refresh

There are several approaches to storing and reporting the latest update of the data model. I present 3 versions, all of which are based on a Power Query query that provides the relevant timestamp, which is then loaded into the data model as a disconnected and invisible single-column, single-row table.

Version #1: DateTime.LocalNow()

You can find several solutions for this topic which use the M function DateTime.LocalNow() to load a table that stores the current time at every refresh of the data model. I blogged about it in 2017 (in German) and here you can find a similar version from Kasper de Jonge. The problem with this approach is the following: the datetime value returned by DateTime.LocalNow() depends on the system of the computer on which the M script is executed. Which computer that is, is not always clear at first sight. Here are a few scenarios as an example:

You publish manually from Power BI Desktop to the Power BI service

DateTime.LocalNow() returns the datetime value of the operating system on which Power BI Desktop runs.

Scheduled refresh based on cloud data sources (no Gateway involved)

In case you are working with cloud data sources, the M-script is executed on the server in the Microsoft data center, where your Power BI datasets are located. In my case this is a data center in Ireland. Ireland is not in the same time zone as Germany. Therefore DateTime.LocalNow() returns a different value here than e.g. on my local laptop.

Scheduled refresh based on on-premises data sources (Gateway involved)

If you automate the data update to the Power BI service through an on-premises data gateway, all M-scripts are executed in the gateway before they pass through the network in compressed form to the Power BI service. Thus, the datetime value returned in this case depends on the system on which the gateway is installed.

There are certainly other scenarios. What these 3 points should show is that DateTime.LocalNow() is probably not the safest method to get the desired result. Let’s look at alternatives.

Version #2: DateTimeZone.FixedUtcNow() to the rescue

An alternative approach is to use the function DateTimeZone.FixedUtcNow() in combination with DateTimeZone.SwitchZone(), like so: DateTimeZone.SwitchZone( DateTimeZone.FixedUtcNow(), 1 ). While DateTimeZone.FixedUtcNow() returns the current time in Coordinated Universal Time, the second parameter of DateTimeZone.SwitchZone() lets me shift the UTC time by x hours (and minutes, if needed). This sounds like a great alternative to DateTime.LocalNow(), but the devil is in the details.
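A minimal sketch of such a query – the one-hour offset is hard-coded here, which is exactly the weakness discussed next:

let
    // UTC shifted by +1 hour = Central European (winter) Time
    Source = #table(
        type table [ timestamp = datetimezone ],
        { { DateTimeZone.SwitchZone( DateTimeZone.FixedUtcNow(), 1 ) } }
    )
in
    Source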

Since I live in Germany and we still jump diligently between summer time (aka daylight saving time) and winter time here, in my case this offset to UTC cannot be static, but has to move between 1 and 2 depending on the date. I have already blogged about this here in German.

So this approach definitely leads to a useful result, but instead of calculating the – sometimes changing – offset to UTC, it would be nicer if you could simply specify the desired time zone and get the datetime value back, wouldn't it? For this I use REST API calls.

Version #3: Calling a REST API

Instead of worrying about how to calculate the difference between UTC and my own time zone, you can use a web service of your choice. The following script calls a REST API that returns the current date and time in the desired time zone (passed as a parameter), which in my case is CET – Central European Time.
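The original script is not reproduced here; as an illustration, a sketch against the public World Time API (my choice for this example – any comparable service works) could look like this:

let
    // Ask the service for the current time in the Europe/Berlin time zone
    Response = Json.Document(
        Web.Contents( "https://worldtimeapi.org/api/timezone/Europe/Berlin" )
    ),
    // The service returns the local time as text, e.g. "2020-11-27T17:20:55+01:00"
    Timestamp = DateTimeZone.FromText( Response[datetime] ),
    Source = #table( type table [ timestamp = datetimezone ], { { Timestamp } } )
in
    Source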

Whichever of the above methods you choose to save the time of the latest data update: the result is a single-column, single-row disconnected table in the data model, which you will probably hide there.

Latest model refresh as a disconnected single-row/single-column table in the data model

Putting this info on the report

To put this info into a report, you should wrap the datetime value in a DAX measure, so that you can control the formatting (including line breaks). The following DAX statement does the job:

LastRefresh =
"Latest model refresh: " & UNICHAR ( 10 )
    & FORMAT ( VALUES ( 'LatestRefresh'[timestamp] ), "dd/mm/yyyy hh:mm:ss" )

UNICHAR(10) creates the line break and the FORMAT() function makes sure I can format the datetime value as I need it. Put this measure in a card visual and you're good to go. Next, I make sure that the transaction data in the fact tables is up to date.

Last record of the transaction tables (aka fact tables)

If the data sources don't already provide the data in the necessary structure, I am a strong advocate of doing all ETL tasks in Power Query, even if DAX could partially do the same tasks. Therefore, there are almost never columns or tables calculated with DAX in my data models. In the current case, this is different. My data model has 3 fact tables. In order to know whether my data sources contain current data, I check the corresponding date columns of the fact tables for actuality. Normally these are the date columns related to the calendar table. Doing this in Power Query could take much longer than doing it with DAX, so the decision is easy.

The fact tables in which the maximum date values are to be checked

To create a table using DAX that gives me the necessary information, I use the following DAX statement:

MaxDatesFactTables =
{
    ( "PnL_ACT (column: 'Invest date')", CALCULATE ( MAX ( PnL_ACT[Invest date] ), ALL ( PnL_ACT ) ), 3 ),
    ( "PnL_PLAN (column: 'date')", CALCULATE ( MAX ( PnL_PLAN[date] ), ALL ( PnL_PLAN ) ), 2 ),
    ( "PnL_Forecast (column: 'date')", CALCULATE ( MAX ( PnL_FC[date] ), ALL ( PnL_FC ) ), 1 )
}

The result looks as follows:

Checked table + column, max date & sort column

The three columns contain the following information:

  • Value1: the name of the fact table plus the column on whose basis the actuality of the data was determined.
  • Value2: the maximum date of this specific column.
  • Value3: a value by which I can sort the table in the report.

Now let’s take a look at how I prefer to present this info.

This is how I prefer to present this information

I think the info about when the dataset had its latest refresh is important almost every time – that's why I include it on every page. But what about the latest dates in each fact table? This can be important, but I hardly need it every single time I look at a report page. This is where visual-specific tooltips come in handy. I create a tooltip page (see the official documentation to learn how that works) on which I display the disconnected table 'MaxDatesFactTables' as a matrix. Since the table created in DAX has the headings Value1, Value2 and Value3, I overwrite them in the column captions of the matrix visual: Value1 becomes Table and Value2 becomes Max Date.

The on-demand tooltip

To place this tooltip in the visual header of the card visual, I go to the format properties of the card visual in which the LatestRefresh is located.

Set up the visual header tooltip

I turn on the visual header (if not already done) and activate the 'Visual header tooltip icon'. After that, a new menu item appears in the format properties: 'Visual header tooltip'. Here I select my tooltip page in the field 'Report page'. Done! 🙂

Additional information in the Power BI service: when was the data last uploaded?

In addition to the latest refresh of the data model and the actuality of the data in the data source, the question sometimes arises as to when the data was last uploaded to the Power BI service. If you update your data via scheduled refresh, the time of the last model refresh and the upload to the Power BI service will be (nearly) identical. However, if you manually publish your data to the Power BI service (via Power BI Desktop), then the model refresh and publishing to the Power BI service are two processes that can differ greatly in time. This information is available to every user directly in the report.

See when the data was last updated in the Power BI service

And as mentioned at the beginning: even if you automatically update your Power BI dataset (and thus the latest model refresh and the upload to the Power BI service have identical times), if you want to print the report, you have to put the info on the report yourself 😉

Cheers from Germany,

Lars

Fast table construction with Table.FromColumns and lists with metadata records
https://ssbi-blog.de/blog/technical-topics-english/fast-table-construction-with-table-fromcolumns-and-lists-with-metadata-records/
Thu, 03 Sep 2020

One of the fastest methods I know to create tables in Power Query/M is the function Table.FromColumns(), which takes a list of lists and creates a table from them. This way you can easily create a calendar table, for example. In this article I show you how to use the metadata record of list items to flexibly assign both column names and data types to the created table.

Take a look at the following M script. At the beginning I define 4 lists: A list of…

  • dates,
  • day names,
  • month names and
  • month numbers.

The ListOfColumns is basically a compilation of the lists defined above that are to be included in the final table. Here I can flexibly determine 1) whether they should be included and 2) in which order they should be placed.

The final step GetTable creates a table from the lists defined in ListOfColumns.
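The original script is not reproduced here, but a compressed sketch of the idea looks like this:

let
    Dates        = List.Dates( #date(2020, 1, 1), 366, #duration(1, 0, 0, 0) ),
    DayNames     = List.Transform( Dates, each Date.DayOfWeekName( _ ) ),
    MonthNames   = List.Transform( Dates, each Date.MonthName( _ ) ),
    MonthNumbers = List.Transform( Dates, each Date.Month( _ ) ),

    // Pick which lists end up in the table, and in which order
    ListOfColumns = { Dates, DayNames, MonthNames, MonthNumbers },
    GetTable = Table.FromColumns( ListOfColumns )
in
    GetTable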

The problem with this solution is that the final table has neither data types nor column names.

Final table WITHOUT proper column names and data types

A possible solution is to define in the ListOfColumns list not only the lists to be included in the table, but also the column name that each list is to receive in the table and its later data type. I define this via the metadata record (see code lines 16 to 23 of the original script):

In line 28 I use the Value.Metadata() function to retrieve the ColName information from the metadata record of each list item and get a list of column names.

In line 31 I not only create the table, but also use the second argument of the Table.FromColumns() function to define the correct column names.

From line 34 on I assign to the columns the correct data types, which I had stored in the metadata records of the list items. The result is a table that can be created flexibly and shows both proper column names and correct data types:

Final table with proper column names and data types
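Putting it all together, here is a compressed sketch of the approach (the line numbers mentioned above refer to the original script, not to this snippet):

let
    // Each list carries its later column name and data type in its metadata record
    Dates        = List.Dates( #date(2020, 1, 1), 366, #duration(1, 0, 0, 0) )
                       meta [ ColName = "Date", ColType = type date ],
    DayNames     = List.Transform( Dates, each Date.DayOfWeekName( _ ) )
                       meta [ ColName = "Day name", ColType = type text ],
    MonthNumbers = List.Transform( Dates, each Date.Month( _ ) )
                       meta [ ColName = "Month number", ColType = Int64.Type ],

    ListOfColumns = { Dates, DayNames, MonthNumbers },

    // Read the column names from the metadata records
    ColumnNames = List.Transform( ListOfColumns, each Value.Metadata( _ )[ColName] ),

    // Create the table with proper column names ...
    GetTable = Table.FromColumns( ListOfColumns, ColumnNames ),

    // ... and assign the data types stored in the metadata records
    SetTypes = Table.TransformColumnTypes(
        GetTable,
        List.Transform(
            ListOfColumns,
            each { Value.Metadata( _ )[ColName], Value.Metadata( _ )[ColType] }
        )
    )
in
    SetTypes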

Cheers from Hamburg, Germany

Lars

3 ways for sums over columns in Power Query
https://ssbi-blog.de/blog/technical-topics-english/3-ways-for-sums-over-columns-in-power-query/
Wed, 10 Jun 2020

Normally I calculate sums in Power Query over rows. Recently, however, I was given the task of calculating sums over columns. I wrote a German-language post about this here, using mainly functionalities of the UI.
Power Query guru Bill Szysz commented on this post (even though it was in German) and sent me an alternative solution via email. He gave me his permission to share this solution with you here. In addition, I had already had a conversation with my friend and Power Query guru Imke Feldmann about another alternative solution, which I have also included in this article. Happy learning 🙂 You can download the sample file here.

The goal

The goal is to create a row at the end of the table that represents the total of the month columns and contains the row label 'Total' in the first column.
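Before we get to the three solutions: as a plain baseline (my own sketch, not one of the three), the total row could be built like this:

let
    Source = #table(
        type table [ Category = text, Jan = number, Feb = number, Mar = number ],
        { { "A", 10, 20, 30 }, { "B", 40, 50, 60 } }
    ),
    MonthColumns = List.RemoveFirstN( Table.ColumnNames( Source ), 1 ),

    // Build a record: "Total" in the first column, a column sum everywhere else
    TotalRow = Record.FromList(
        { "Total" } & List.Transform( MonthColumns, (col) => List.Sum( Table.Column( Source, col ) ) ),
        Table.ColumnNames( Source )
    ),
    Result = Table.InsertRows( Source, Table.RowCount( Source ), { TotalRow } )
in
    Result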

Alternative Solutions

Imke and Bill sent me their solutions, but the comments in the source code are from my pen. I wanted to make it easier for you as a reader, and I hope that I didn't make any mistakes. If you find any mistakes here, they are on me.

Solution #1 – by Bill Szysz

Solution #2 – by Bill Szysz

Solution #3 – by Imke Feldmann

Conclusion

There are many roads to the solution. These 3 seem very creative to me and show quite different ways to solve the problem. With the available (relatively small) amount of data, the 3 methods seem to be almost equally fast. Of course this would have to be tested with larger amounts of data, but that's not what this post is about :-). I thank Imke and Bill a lot for their solutions – I have learned a lot. Which of them do you find the most exciting?

Cheers from Hamburg, Germany

Lars

How to handle custom errors in M gracefully
https://ssbi-blog.de/blog/technical-topics-english/how-to-handle-custom-errors-in-m-gracefully/
Tue, 10 Dec 2019

I can recall that back then, in the good old Excel VBA days, at some point, I was very busy with error handling to make it easier for the user to understand the logic of my program. One article was particularly illuminating for me: Error handling via an error class, by Dick Kusleika.
The languages M and VBA don't have much in common, but in M, too, I want to raise custom errors when something contradicts the business logic. Providing the user with a meaningful error message can greatly increase the acceptance of the program. This article deals with how you can gracefully handle custom errors in M.

Recommended resources about standard error handling in M

Of course some M enthusiasts in the community have already dealt with the topic of error handling, and I recommend you have a look at their posts as well:

Miguel Escobar highlights in the first of the mentioned posts that you can raise your own (custom) error messages if you want to.

What are custom errors and when are they useful?

Power Query/M returns built-in error messages when an expression cannot be evaluated. For example, the expression A + 1 returns the following error message.

Standard error message in M

But what if you want an error message to be returned for an expression that M could evaluate, but that doesn't fit your business logic? In the following example, I want an error message to be returned whenever the tax percentage is not 7% or 19%.

Throwing a custom error in M

The purpose of custom errors is to return error messages when the expression is technically evaluable but contradicts your own business logic. In this case, it is good and right to provide users with the right information so that they can correct the error as quickly as possible. Let's see how this can be done.

The manual version

You can throw a custom error by using the keyword 'error' followed by the function Error.Record(). Within the function you can define up to 3 arguments:

  • reason as text,
  • optional message as nullable text and
  • optional detail as any

The following screenshot shows how to manually generate this error message in a calculated column:

Throwing a custom error by writing the Error.Record manually
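Expressed in M, such a calculated column could look like this (the step name Source, the column name [Tax] and the two percentages are assumptions based on the example above):

// Add a custom column "Check" that raises an error for any tax rate other than 7% or 19%
Table.AddColumn(
    Source,
    "Check",
    each if [Tax] = 0.07 or [Tax] = 0.19 then
        [Tax]
    else
        error Error.Record(
            "Wrong tax percentage",                          // reason
            "Only 7% and 19% are valid tax percentages.",    // optional message
            "Found: " & Text.From( [Tax] )                   // optional detail
        )
)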

But what if I want to check for this (logical) error in many places in my queries? Do I then have to write this Error.Record manually each time, or copy it from one of my previous solutions? To avoid this manual process, I have written my own function that solves this problem!

Using a custom error function for convenience

Whenever it makes sense, I outsource tasks to custom functions that I can then reuse. This reduces the risk of errors and makes my code clearer and easier to maintain. My goal in calling my custom error therefore looks like this:

Throwing a custom error by calling a custom function and the passed custom error ID

So instead of manually rewriting (or copying) the Error.Record at every necessary point in my code, I want to define it centrally in one place and then call it (based on its ID) via a function from any of my queries. I'll show you how to do this now.

The function fnReturnCustError()

I have commented on the functionality of my function directly in the source code and hope that this is sufficient to understand it.
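The original function is not reproduced here; a hypothetical reconstruction of the idea – a central error table plus a lookup by ID – might look like this:

// fnReturnCustError – sketch: raises the custom error stored under the given ID
(customErrorID as number) =>
let
    // Central definition of all custom errors (extend this table as needed)
    ErrorTable = #table(
        { "ID", "Reason", "Message", "Detail" },
        { { 1, "Wrong tax percentage", "Only 7% and 19% are valid tax percentages.", "Check the source data." } }
    ),
    SelectedError = Table.SelectRows( ErrorTable, each [ID] = customErrorID ){0}
in
    error Error.Record( SelectedError[Reason], SelectedError[Message], SelectedError[Detail] )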

I hope this was interesting for one or the other reader. I am sure that this method can be developed further. If you do so, please let me know.

Greetings from Germany,
Lars

How does the filter context in DAX apply?
https://ssbi-blog.de/blog/technical-topics-english/how-does-the-filter-context-in-dax-apply/
Mon, 11 Nov 2019

If you deal with Time Intelligence (TI) functions in DAX, you quickly reach the point where the standard TI functions are no longer sufficient. Sooner or later you land on the ingenious site daxpatterns.com of Marco and Alberto from SQLBI, where they, among other things, show patterns for custom TI functions. This article uses their TI pattern as an example to show how the filter context works, but the findings transfer to DAX in general.

What the pattern looks like

Custom TI functions can be useful and necessary when the standard TI functions of DAX are no longer sufficient. The pattern from SQLBI looks as follows:

[SalesYTD] :=
CALCULATE (
    [Sales],
    FILTER (
        ALL ( 'Date' ),
        'Date'[Year] = MAX ( 'Date'[Year] ) &&
        'Date'[Date] <= MAX ( 'Date'[Date] )
    )
)

https://www.daxpatterns.com/time-patterns/

What the pattern does

Let's try to follow the pattern using the (green-highlighted) example value January 3rd, 2018:

The TI pattern and the existing filter context

The pattern does the following:

  1. The FILTER function is an iterator. It iterates over the complete and unfiltered Calendar table row by row, because ALL( 'Calendar' ) ignores the existing filters of the filter context (the date January 3rd, 2018) on the Calendar table.
  2. Within the row context created by the FILTER function, it compares two things for each row of the Calendar table:
    1. whether the given 'Year' (marked red) is equal to the maximum 'Year' (marked green) respecting the current filter context (which is January 3rd, 2018), and
    2. whether the given 'Date' (marked red) is smaller than or equal to the maximum 'Date' (marked green) of the current filter context (which again is January 3rd, 2018).
  3. At the end, when all filters are set, the measure [Sales] is evaluated.

Now please answer the following question: why are MAX( 'Calendar'[Year] ) and MAX( 'Calendar'[Date] ) evaluated under the existing filter context? Take some time and see if you can find a satisfactory answer.

Why it works

I have been using this pattern for a very long time and simply accepted that it works, without really asking myself why. There is an already very old but wonderful article by Jeffrey Wang (read my interview with him here) which describes how cross filtering works in DAX. He explains that there are only 3 'targets of filter context' in the DAX language:

  1. A table reference (like 'Sales'),
  2. VALUES( Table[Column] ) and
  3. DISTINCT( Table[Column] ).

Nothing else is affected by the filter context. But how does that relate to the TI pattern of our Italian friends, as neither MAX( 'Calendar'[Year] ) nor MAX( 'Calendar'[Date] ) contains one of the 3 targets from above?

It’s all about syntax sugar

Since DAX was developed for business users, "the original design goal was to make syntax as simple as possible in common usage patterns while maintaining a coherent and semantically sound language model" (quote Jeffrey Wang, from my interview with him). That's why there is so much "syntax sugar" in DAX, which means that a more complex DAX expression is hidden behind a simpler one.

I think that most DAX users know that
=
CALCULATE ( [Total Sales], Product[Color] = "Red" )

is internally converted to

=
CALCULATE (
    [Total Sales],
    FILTER ( ALL ( Product[Color] ), Product[Color] = "Red" )
)

just to make the DAX entry point easier for beginners. The same applies to all aggregation functions that accept a column reference such as Table[Column] as the only argument, e.g. SUM( Table[Column] ) and also MAX( Table[Column] ). Jeffrey describes it in the above-mentioned blog article as follows:

Note that DAX function Sum(T[C]) is just a shorthand for SumX(T, [C]), the same is true for other aggregation functions which take a single column reference as argument. Therefore the table in those aggregation functions is filtered by filter context.

Good bye syntax sugar

Armed with this knowledge, let’s have a look at the TI pattern without syntax sugar:

The TI pattern without syntax sugar
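Spelled out, the desugared pattern presumably looks like this:

[SalesYTD] :=
CALCULATE (
    [Sales],
    FILTER (
        ALL ( 'Date' ),
        'Date'[Year] = MAXX ( 'Date', 'Date'[Year] ) &&
        'Date'[Date] <= MAXX ( 'Date', 'Date'[Date] )
    )
)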

Since each MAX( Table[Column] ) expression is internally represented by MAXX( Table, Table[Column] ), the circle closes here: the first parameter of MAXX() is a table reference, and that is one of the 3 targets of the filter context. All of this can of course be transferred to other DAX expressions.

And now to be honest, was that the answer you would have given me to my question?! 😉

Cheers from Germany,

Lars

Table.Profile and its unknown second parameter
https://ssbi-blog.de/blog/technical-topics-english/table-profile-and-its-unknown-second-parameter/
Thu, 17 Oct 2019

A couple of weeks ago I finally published my post about tables in M – how, when and why. Under 'Other special table functions' I mentioned Table.Profile() as a function to get meta information about tables. During my research I noticed that this function accepts a second, optional parameter which is not documented anywhere. Other posts about this function by colleagues from the community did not mention this parameter either (at least none I could find). This made me curious, so I reached out to the Power Query dev team.

Syntax of the function

The syntax of the function is as follows: Table.Profile(table as table, optional additionalAggregates as nullable list) as table. The first, mandatory parameter is the table whose profile is to be created. The second parameter is a puzzle. Let's take a closer look at it.

Second parameter «additionalAggregates»

This argument is expected to be a list of lists, like this: {{},{}}. Each of the inner lists represents a new calculated profile column and consists of three items:

  1. the output column name, as text,
  2. the typecheck function, as function,
  3. the aggregation function, as function.

Let’s take a detailed look.

Output column name

This is just the name of the new profile column, which of course should reflect the content of the column.

Typecheck function

The typecheck function is expected to be something like each Type.Is(_, type any). While the keyword "each" refers to the current record in the profiling table, the underscore "_" refers not – as you might expect – to the current record, but to the column of the source table whose name can be found in the first column "Column" of the current record. So the underscore "_" doesn't return a record, but a list. This information will also be important for the third item of the parameter additionalAggregates: the aggregation function.

Aggregation function

If the second item – the typecheck function – returns true, the aggregation function kicks in. Otherwise the output of the aggregation function will be null.

So what could examples of the aggregation function look like?

Aggregation function – example #1

= Table.Profile(Table.FromRows({{1, 2}, {3, null}}, type table [A=any, B=number]), {{"New profile column", each Type.Is(_, type number), each List.Average(_)}})

What happens in the profile table?

  • Column A: This column is of type any. My function explicitly checks for type number (Type.Is(_, type number)), so the function List.Average() is not applied to column A. The value null is returned accordingly.
  • Column B: This column is of type number. My function explicitly checks for type number (Type.Is(_, type number)), so the function List.Average() kicks in. The value 2 is returned, as the average of 2 and null is 2.

Aggregation function – example #2

= Table.Profile(Table.FromRows({{"ABC", 2}, {"C", 3}}, type table [A=text, B=any]), {{"AverageCharactersCount", each Type.Is(_, type text), each List.Average( List.Transform(_, each Text.Length(_))) }})

What happens in the profile table?

  • Column A: This column is of type text. My function explicitly checks for type text (Type.Is(_, type text)), so the function List.Average( List.Transform(_, each Text.Length(_)) ) is applied to column A. The returned value is 2, as the average text length of "ABC" and "C" is 2.
  • Column B: This column is of type any. My function explicitly checks for type text (Type.Is(_, type text)), so the aggregation function doesn't kick in and null is returned.

When using the UI to assert types

When you change a data type using the UI, the M function Table.TransformColumnTypes() does that job behind the scenes. What you should always keep in mind when using this function is that it always returns a nullable data type. This is something I forgot, even though it was a central message of one of my own recent posts. Why is this important here? Take a look at the following examples:

In the first screenshot I use ascribed types, and my custom column for Table.Profile() just works.

In the second screenshot I use the UI/Table.TransformColumnTypes() to assert the type, and my function returns null where I expected the value 2. Why does that happen? As I said, in my example Table.TransformColumnTypes() returns a nullable text type, and Type.Is(type nullable text, type text) = false. This is why I get back null.

To fix this behavior, I have to check for type nullable text, instead of type text:
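For example, building on example #2, but with the type asserted via Table.TransformColumnTypes():

= Table.Profile(
    Table.TransformColumnTypes(
        Table.FromRows( { {"ABC"}, {"C"} }, { "A" } ),
        { { "A", type text } }                        // yields a *nullable* text column
    ),
    { { "AverageCharactersCount",
        each Type.Is( _, type nullable text ),        // check for nullable text, not text
        each List.Average( List.Transform( _, each Text.Length( _ ) ) ) } }
)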

The moment you do it right, it’s running. 🙂

Greetings from Germany,

Lars

Tables in Power Query – how, when and why
https://ssbi-blog.de/blog/technical-topics-english/tables-in-power-query-how-when-and-why/
Fri, 20 Sep 2019

The M language has so-called structured values, to which lists, records and tables belong. Each value type serves specific purposes and this post is intended to give an introduction to tables. This post is part of a series about lists, records and tables in M.

What is a table in M?

In M there are two kinds of values: primitive and structured values. Examples for primitive values are:

  • "a",
  • 1,
  • true.

They are primitive in the sense that they are not constructed out of other values. In contrast to primitive values, we have so-called structured values in M, which are composed of other values, primitive and structured ones. A table is one of those structured values (the others are lists and records) and it is described as "a set of values organized into columns (which are identified by name), and rows."

Even if tables usually contain columns and rows, they can be empty, which looks like this: #table({},{}).

An example of a very simple representative of a table is:

#table( {"column_1", "column_2"}, { {1, 2}, {3, 4} } )

Before taking a more detailed look at what tables are and what they are for, let's discuss why to use tables at all.

Why use tables at all?

Tables are often the final result of a query, which can then either be used as an intermediate query or loaded into the data model in Power BI and Power Pivot. In addition, there are functions that work with tables as input parameters and others that generate tables as return values. For these reasons it is necessary to know how to deal with them in order to use the M language safely.

How to create tables in M with native M functions

There is no literal syntax to create a table, like there is for records ([]) and lists ({}), but there are several native M functions that create tables.

#table()

This function (pronounced "pound table") is something special, as it is not (yet) returned by the intrinsic variable #shared. Since the first parameter of the function can be used in different ways, there are several ways to use this function.

The second parameter is always a list of lists, filled with values. Each inner list represents a row within the table. Let's take a look at how the first parameter can be used in different ways.

Version #1 – Specifying column names in a list

#table( {"A", "B"}, { {1, 2}, {3, 4} } ) →By specifying the column names as text values in a list, every column gets its specific column name.

Version #2 – Defining a number of columns

#table(5, {{ 1,2,3,4,5}} ) → If I need to create a known number of columns (e.g. 100), but they only need to have values and no specific column headings, I can use this syntax.

Version #3 – Using so called ascribed types to define column names and types

#table( type table [A = number, B = text], {{1,"one"}, {2,"two"}, {3,"three"}} ) → With this version you can define not only the column headings and the corresponding values, but also the data types of the individual columns.

I will describe why using ascribed types to define the types of values in a table column is a BAD IDEA later, in a separate section of this post. Now let's take a look at functions that create tables from other values in M.

Conversion functions: Table.From*()/ *.ToTable

Conversion functions create a table from other values, like lists, records, etc. A few prominent representatives are Table.FromList(), Table.FromRecords(), Table.FromValue() and Record.ToTable().

Other M functions that return tables

In addition to the conversion functions, there is a ton of other functions returning tables in M. First of all I want to mention those functions that allow you to define tables yourself by giving them lists built in either row or column logic: Table.FromRows() and Table.FromColumns().

In addition to these functions, there are many more that return tables. Most of them are connectors to external data sources. Examples of this are: Csv.Document(), DataLake.Files(), or Github.Tables().

Don’t use ascribed types for type definitions

Dealing with types in M is really tiresome. In several places I stumbled across problems in M whose solution was to be found in M's type system. I wrote under Version #3 that it is not recommended to ascribe types to the values of a column in a table. I wrote a blog post about it, which describes the reasons in detail, and I recommend you read it before going further in this post. In short: Power Query accepts your type definition via ascribed types as correct and does no validation of whether the declared type of the column is compatible with the values in that column. Only when you try to load these values into another engine (the Excel data model, the Power BI data model, or pure Excel) will you get error messages, because these systems recognize the mismatch.

Operators for tables + equivalent functions

There are 3 operators that can be used in conjunction with tables: "=" and "<>" make it possible to compare tables, while "&" combines tables. Here are some examples of how to use them:

#table({"A","B"},{{1,2}}) = #table({"B","A"},{{2,1}})true. Tables are equal if all of the following are true:

  • The number of columns is the same.
  • Each column name in one tables is also present in the other table (regardless of its position in the table).
  • The number of rows is the same.
  • Each row has equal values in corresponding cells.

#table({"A","B"},{{1,2}}) = #table({"B","A"},{{2,1}, {3,4}})false, due to a different number of rows.

#table({"A","B"},{{1,2}}) <> #table({"B","A"},{{2,1}, {3,4}})true

#table({"A","B"}, {{1,2}}) & #table({"B","C"}, {{3,4}})#table({"A","B", "C"}, {{1,2, null}, {null, 3,4}}). This can also be achieved using the function Table.Combine({ #table({"A","B"}, {{1,2}}), #table({"B","C"}, {{3,4}}) }).

Accessing table elements

Once you have a table, it is often necessary to access specific items within the table directly.

Accessing a single column

TableName[ColumnName] → the result is a list. Therefore Value.Is(#table({"A","B"},{{1,2}})[A], type list) returns true. If you want to learn more about lists, read this post.

Accessing a single row

TableName{zero-based row index} → the result is a record. Therefore Value.Is(#table({"A","B"},{{1,2}, {3,4}}){1}, type record) returns true. If you want to learn more about records, read this post.

Accessing a single cell

Accessing a cell in a table brings accessing a row and accessing a column together:

TableName[ColumnName]{zero-based row index} which is equivalent to TableName{zero-based row index}[ColumnName], column and line references can therefore be swapped.

The result has no predictable type because it returns what is in the cell and that can be anything: a scalar value, a list, a record, another table, a function, etc.
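All three access patterns in one small sketch:

let
    T = #table( { "A", "B" }, { { 1, 2 }, { 3, 4 } } ),
    Column = T[A],     // {1, 3} – a list
    Row    = T{0},     // [A = 1, B = 2] – a record
    Cell   = T{1}[A]   // 3 – whatever value is stored in that cell
in
    Cell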

Special functions to select elements in a table

If you don’t want to address only one column, one row, or a certain cell of a table, but need to select several columns or rows by condition, there are functions in M that help you. Here are a few examples:

Table.SelectRows() → Returns a table with those rows of the input table that match the selection condition.

Table.SelectColumns() → Returns a table with only the columns that match the selection conditions. The third, optional parameter missingField of this function is very useful: it determines what happens if an addressed field (column) does not exist (see the sketch after this list).

  • MissingField.Error → Default: Return an error
  • MissingField.Ignore → Don’t select the column
  • MissingField.UseNull → Fill the column with null values
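A small sketch of the missingField behavior:

// Column "C" does not exist in the source table
= Table.SelectColumns(
    #table( { "A", "B" }, { { 1, 2 } } ),
    { "A", "C" },
    MissingField.UseNull    // returns columns A and C, with C filled with null
)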

Table.ColumnsOfType() → This function is very useful if you want to select those columns of your table that match a specified type or a list of types. That way you can, for example, select all columns of type text. Although this function can be very helpful, it does not always return the expected columns. I've written a post about the pitfalls of Table.ColumnsOfType.

Other special table functions

If you want to get meta information about the table you are using, the following two functions will help you:

Table.Schema() → This function returns information about the columns of the table, like:

  • the 0-based position of the column in the table,
  • the name of the type of the column,
  • nullability,
  • maximum length.

Table.Profile() → Returns the following information for the columns in a table:

  • minimum,
  • maximum,
  • average,
  • standard deviation,
  • count,
  • null count,
  • distinct count.

Table.Buffer() → This is a special function which is used, among other things, for query performance optimization, as it keeps a snapshot of the table in memory. There are several good blog posts out there that cover this topic – see Imke's post about Table.Buffer inside List.Generate to improve performance.

Even though there is much more to say about tables and their capabilities, I hope this was a helpful introduction to the topic 🙂

Greetings from Germany,

Lars
