Agreement Life Cycle Best Practices From Acquisition to Divestiture
With Matt Kopycinski, Manager, GIS & Land Services at Pandell
Duration: 32mins, Released Feb 25, 2021
“…if you already start off on the wrong foot these documents can’t be processed in the appropriate manner for the future steps. Laying a good foundation is important with your documents…”
In this webinar, Pandell's own Matt Kopycinski provides an end-to-end process overview and discussion of best practices for energy asset A&D. He presents quality control measures that are key to reliable data integrity for utility, pipeline, renewable, mining, and O&G companies in Canada and the U.S.
About The Pandell Leadership Series
The Pandell Leadership Series is a collection of free webinars featuring presentations by energy industry experts in a variety of specialized fields. Topics range from global business issues to recommended best practices in oil and gas; pipelines; mining; utilities; and the renewable energy industry (including wind, solar, hydrogen, geothermal, marine & hydrokinetic, nuclear and biomass power).
Please Note: Views and opinions expressed by the PLS presenter(s) do not necessarily represent the views of Pandell and its representatives.
ELIZA WITH PANDELL Welcome to the fifth event in our Pandell Leadership Series. My name is Eliza and I’m the Customer Engagement Specialist here at Pandell. Today our presentation is all about Agreement Life Cycle Best Practices from Acquisition to Divestiture.
So, now I’d like to talk a bit about our speaker who is our very own Matt, Manager of GIS & Land Record Services. So, Matt leads Pandell’s Land and GIS Services Division. This supports companies in all energy sectors to migrate from paper land files to digital asset tracking and mapping. Matt’s expertise includes both analysis and implementation of data QA standards. He’s led dozens of digitization and mapping projects that have converted more than 500,000 land agreements over the last six years. So, needless to say Matt is a bit of an expert in his field. So, we are very thrilled to have Matt speak with us today and I will pass it over to you now Matt.
MATT WITH PANDELL Thanks Eliza. That’s a better introduction than I would have given myself. You might have buttered my bread a little bit too hard there.
Welcome everybody. I’m very happy to be able to present for you guys today and let you in on a few things that we here at the services department at Pandell use on a day-to-day basis to accomplish the goals that we’re going to be talking about today.
I want to note that today we are going to focus a bunch on quality control. To me that’s really the high-level stuff that we see go wrong in our normal conversions, or with people that ask us to come in for help after they’ve used a third party for a conversion. So, a lot of the things we’re going to talk about today are really based around that quality control and quality assurance.
We’re going to be talking about that agreement life cycle, like we mentioned before, in regard to an acquisition on a linear asset, but this would apply to all industries, including ownership and depth for, say, an E&P or coal company. The idea of starting a new project from scratch, or building a new line or extension where we acquire new parcels, would look a little bit different than this, but the principles of the quality control and data assurance would be the same. So, while some of the technologies would be shared between these two ideas, the workflows would be a little bit different. That gives you a preview of the industries we’re looking at today.
I’ve got to be unbiased today in regard to the Pandell technologies, and all my talking points will be from a third-party perspective, or at least I’ll try to be. However, our technologies help you perform all the steps that we’re going to be talking about faster, more accurately, and with total confidence, so that you can rely on that data to make business decisions. So, if you’re a current Pandell customer you probably already know that. And if you’re not a current Pandell customer, why not? We can definitely help you improve your processes, efficiencies, and the reliability of your data.
So, I’ve got to confess that some of the examples you’ll see today are of existing or soon-to-be client data. All the references have been removed, so if you see some of your own data in here, I’m sorry you had to play guinea pig for me, but thanks in advance.
When acquiring a new project, and when I say acquiring a new project I mean something that’s already been an existing asset for a divesting company, the divesting party might already have a land management system in place, and a conversion into your own land system, if you have one, can be performed. However, you’re trusting that a human did that stuff right the first time, and due diligence is still needed to ensure accuracy of the files. And the reason for that is most of the time they look like this. It’s just a box of papers, and there’s no rhyme or reason to how they stored it. They’re either by agreement, or by parcel, or by asset. The idea of trusting the organization of the divesting party gets questionable. Especially when that box of files probably came from somewhere that looks like this: a storage center with all our paper files stored in one place. And again you’re trusting that a human did this portion right the first time too. That they actually gave you everything that you needed for the divestment, all the supporting documents, and that their legal team actually identified the land rights that are included in the bill of sale.
As a first step in the quality control of the due diligence process, undertaking a scanning project to ensure all of the documents that you get have been digitized with high-quality OCR is imperative. This could be a multi-million-dollar acquisition and scanning is cheap, so let’s make sure we’ve got everything they hold on paper at least in a digital format. Even if they think they’ve already done it 100 percent and they give you scans that are digitized already, again, I’m not trusting that a human did it right the first time, and I like to undertake a scanning project again.
Now even after a high-quality scanning project like this has been done, a folder or box would look something like this. It’s just a ton of different PDFs that have been force-fed through a scanner and created into this digital file. In the example that we’re looking at today we actually ended up with 680 pages, across fifty-five PDFs and four separate deliveries from the scanning company. That’s where the first part of this data massaging, if you will, comes into play. Here at Pandell we call it document separation. It’s where we break the original folder-level or file-level PDF out into individual documents based on a set of rules. You can see some of these original files that are high-level PDFs like this are large; the first example that we are looking at contains 172 pages. Being able to sift through this 172-page scan to pull out the appropriate information that would be applicable to your land system or your spreadsheet tracking can become pretty difficult. You have orientation issues; you’ve got order-of-pages issues if something gets stuck in the scanner and they start over again. So even though they gave you all the data, it might not be in the prettiest or most usable format.
What we usually do from there on that document separation is place the documents into some smart naming conventions. We usually keep the original file number that’s been supplied here for cross-referencing back to the divesting party. If I have to call them back and ask them specific questions surrounding a certain file, at least I want to be able to talk in their language, even if that language doesn’t mean anything to me. So that original base number that you see there, that 021102, is something that we would try to keep and maintain until we’re certain that we’ve gotten everything from the divesting party.
And we’ve also added a couple of other smart things here, like what type of document it is, say an easement, the state and county it’s from, and then we use a set of suffixing items to determine what’s going on inside the folder at a high level. This really lends itself well to the land management system. You can see the suffixing of 00A and 00B on the very first two files here determines that these are primary documents. Primary documents to Pandell are things that divest the land right, things that create an agreement for us to be able to do what we do on the land. And then there’s a bunch of supporting documentation in here. We’ve got maps, we’ve got financials, correspondence, supporting court documents, and we have a way to associate these secondaries via this suffixing. The zeros and the A’s say that all of these secondary documents go back to primary 00A. Most of the time there isn’t a ton of information that we need to pull out of the secondaries, and the primaries are really the meat and potatoes of what the land management system or spreadsheet tracking should be containing, but the secondaries are still really important for completing the entire picture of what’s going on with what we got and the history behind it.
Lots of times people put everything into a spreadsheet, or file by document in one level of File Explorer, and it can be hard to tell what’s going on with that. In this example, we’ve got a ton of documents in just one level of our File Explorer, and I understand the idea behind this is to contain everything that we could possibly search on. So, if I’m searching, I’m just looking in one file set and I’m getting back everything that I need. However, this really isn’t scalable, right? If I’ve got an acquisition that’s covering, you know, 10,000 agreements, or if we bought the entirety of a company and there are larger file sets than this, one level of hierarchy in your PDFs really doesn’t make sense. So, even in this case where we’ve got this unique identifier of 001, which maybe is by facility or by parcel, we’ve got some unrecorded leases and some ratifications, access road easements, other assignments of the lease, whatever might be there. If you have something like this, it would be nice to have even one folder where this could be contained, with that structure split apart a little bit better.
Quality control with document separation, and being able to prepare these documents so they are named appropriately for your land management system or spreadsheet, can be especially difficult when you’ve got so many hands in the cookie jar. And it’s especially difficult on top of that if you’re using File Explorer, because it just accepts so many things in the way of file naming; there are so many different variations that people might use to try to say the same thing. This is usually where we at Pandell use the service to keep everyone in line. And when I talk about the service, what this is is code written on the Microsoft .NET Framework in C#. What it does is take the file folders and follow the document separation rules that the humans were supposed to be using, and it watches people separate. And nightly, I have it programmed to spit out a report that shows any errors or deviations from the rules, because a human’s going to have the opportunity to make mistakes, but the machine only knows how to do it one way. This report can show me a bunch of things about my team. It shows me all the errors, number one, so the team can fix them, learn from their mistakes, and make themselves more accurate, and it also allows me to monitor team performance: how many pages they have separated on certain days, the number of files they are looking at, total error counts across the way. I have it split up into different tabs that allow me to see where they have made prefixing issues, maybe they used the wrong separator as far as a period or underscore, maybe they entered an invalid county or state that doesn’t match the area we are looking at, so there are only valid answers. It’s really easy to hit MM instead of MN when you are just trying to enter a state abbreviation.
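The service Matt describes is written in C#, but the idea can be sketched in a few lines of Python. The naming convention, valid state list, and report format below are made up for illustration; they stand in for whatever separation rules a team actually adopts.

```python
import re

# Hypothetical separation rule: ORIG_DOCTYPE_ST-COUNTY.SUFFIX.pdf,
# e.g. "021102_EASEMENT_MN-CARLTON.00A.pdf". The pattern and the
# state list are illustrative, not Pandell's actual convention.
VALID_STATES = {"MN", "WI", "ND", "SD"}
NAME_RULE = re.compile(
    r"^(?P<orig>\d{6})_"         # original file number from the divesting party
    r"(?P<doctype>[A-Z]+)_"      # document type, e.g. EASEMENT, DEED
    r"(?P<state>[A-Z]{2})-"      # two-letter state abbreviation
    r"(?P<county>[A-Z]+)"        # county name
    r"\.(?P<suffix>\d{2}[A-Z])"  # suffix tying secondaries to primaries
    r"\.pdf$"
)

def check_filename(name):
    """Return a list of rule violations for one separated file name."""
    m = NAME_RULE.match(name)
    if not m:
        return [f"{name}: does not match naming convention"]
    errors = []
    if m.group("state") not in VALID_STATES:
        errors.append(f"{name}: invalid state '{m.group('state')}'")
    return errors

def nightly_report(filenames):
    """Aggregate every violation, like the nightly error report."""
    report = []
    for name in filenames:
        report.extend(check_filename(name))
    return report
```

A typo like `MM-CARLTON` instead of `MN-CARLTON` surfaces in the next nightly run rather than months later during abstraction.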
Humans make mistakes; that’s why they put erasers on pencils, so we use the machine to keep us in line. When you’re trying to process this many docs, it gets really easy for somebody to get careless. You know, taking a look at this, we’ve got people in here looking at 5,000 pages; that’s a lot. Keeping these original documents clean and concise is really going to lend itself well to the later workflows that we are going to talk about. But if you already start off on the wrong foot, these documents can’t be processed in the appropriate manner for the future steps. Laying a good foundation is important with your documents.
So, now we’ve got all these documents separated. We’ve been through everything; we’ve looked at it. We can run a database script against those documents to add them to the land management system all at one time, rather than taking the painstaking process of adding them one by one. Or you can do the same thing with a spreadsheet, if you’ve got an executable file to look at that drive and place the appropriate primary documents into the spreadsheet. This process first analyzes the directory structure and the separation process that we used. Using those standard naming conventions, it parses the attributes out of the file name and loads them into the land system or the spreadsheet. In this process not a lot of information is pulled from the agreements themselves, but at least now we have a database or a spreadsheet with agreement records that we can start abstracting from. In this step all of the attribute information for the agreements gets populated to a set of data entry standards. Those data entry standards work the same way your separation standards did; it’s about making sure every team member populates the database fields in the same way, so that the database stays concise and congruent: what to track, what the rights held are, how we determine that, et cetera. We populate the agreements first in the spreadsheet or in the land management system, and then we’ll go through and make decisions about superseding agreements, duplicate agreements, assignments and how they’re affecting those agreements, that type of thing. But once we have them all set up, we have a complete and searchable data set; you can’t really tell from a flat file, like File Explorer would provide, all the attributed information that applies.
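As a rough sketch of that bulk-load step, the script below parses agreement attributes straight out of the separated file names and writes the primary documents to a CSV ready for import. The naming convention, the field names, and the rule that a `00` suffix marks a primary document are all assumptions for illustration.

```python
import csv
import io

def parse_record(filename):
    """Parse agreement attributes out of a separated file name.
    Assumed convention: ORIG_DOCTYPE_ST-COUNTY.SUFFIX.pdf"""
    stem = filename[:-len(".pdf")]
    base, suffix = stem.rsplit(".", 1)
    orig, doctype, location = base.split("_")
    state, county = location.split("-")
    return {
        "orig_file": orig, "doc_type": doctype,
        "state": state, "county": county, "suffix": suffix,
        # Treating "00" suffixes as primaries is an assumption here.
        "is_primary": suffix.startswith("00"),
    }

def bulk_load(filenames, out_file):
    """Write one CSV row per primary document to an open, writable
    file object, ready to import into the land system or spreadsheet.
    Returns the number of primaries loaded."""
    records = [parse_record(f) for f in filenames]
    primaries = [r for r in records if r["is_primary"]]
    writer = csv.DictWriter(out_file, fieldnames=list(primaries[0]))
    writer.writeheader()
    writer.writerows(primaries)
    return len(primaries)
```

Run against a separated folder, `bulk_load` produces agreement stub records in seconds; the abstraction team then fills in the substance from the documents themselves.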
So, even at a high level here, when we’re looking at some stuff through a spreadsheet or database format, we can identify some high-level things, like these two rights-of-way we have here from the Conrads; they’re both from the same date. One’s called a pole line easement, the other’s just called an easement. At a high level, that’s something we could identify as a duplicate. Things that we’re able to search on in the database or the spreadsheet will allow us to identify that before we get too far down the road.
And hopefully your database or your land management system has a set of editable code tables so that you can enter data consistently and uniformly, to get rid of punctuation, abbreviation, case sensitivity, all that kind of stuff. In this example we’ve got a ton of different ways to enter right-of-way or easement type language. And when you’ve got a bunch of people working on a due diligence project, this type of quality control is difficult when there are so many people putting their own flavor on how stuff is supposed to look, or perhaps taking it verbatim from the document. That’s what a code table, or even a pick list in a spreadsheet, allows you to prevent. Or a formatted field: if it’s a date field in a spreadsheet that’s looking for only a date, you’re not going to be able to enter “easement” in that area.
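The effect of a code table can be sketched in a few lines. The variant spellings and canonical labels below are invented for illustration; the point is that free-typed values collapse to one canonical code, and anything unrecognized gets flagged for a human rather than silently accepted.

```python
# Hypothetical code table mapping free-typed variants to one
# canonical agreement type, the way an editable code table or a
# spreadsheet pick list keeps entries uniform.
CODE_TABLE = {
    "r/w": "Right-of-Way",
    "row": "Right-of-Way",
    "right of way": "Right-of-Way",
    "right-of-way": "Right-of-Way",
    "esmt": "Easement",
    "easement": "Easement",
}

def normalize(value):
    """Strip case and punctuation noise, then map to the canonical
    code. Unknown values come back flagged for human review."""
    key = value.strip().lower().replace(".", "")
    return CODE_TABLE.get(key, f"REVIEW: {value}")
```

So `R/W`, `row`, and `Right of Way` all land in the database as the same value, which is what keeps later searches and reports trustworthy.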
One thing that’s become a topic over recent years has been OCR lending itself to machine learning applications and data population, and I want to talk about that for a minute. We definitely use it here at Pandell where it can be useful and save time, and it’s extremely accurate on new agreements. We’ve got a beautiful document here; all the information is just typed up in it, and everything is in the same form. If it’s just one project and the same form was used over and over again for a single landowner, it works really well, because machine learning can pick out those types of information. For older due diligence projects it still takes some human interaction, especially where you’ve got a blend of typed text and print, handwriting and cursive, and pictures. So, I want to take a look right now at one where it’s not so successful, where OCR has a difficult time. Both of the examples that we’re going to look at actually came from the same document, from 1920.
Taking a look at the document here, at the legal description, or the legal narrative as some people might call it, the goal is to pull this info out even when you have something more than just a standard formatted metes-and-bounds legal description.
When we copied this info out of our high-quality OCR’d PDF and pasted it into our land management system or spreadsheet, this is what it came across as. So, it’s fairly close; there are some things it freaked out about, like a misspelled “township” and that kind of stuff, but we’re not reinventing the wheel of trying to freehand-type all this information. It can give us a good base to make the adjustments and corrections from.
This can speed up your data entry and efficiency quite a bit, depending on how high quality the document is. However, again on that human interaction, I’d like to take a look at the recording stamp of this exact same document. It looks something like this. Everybody on the call is a land admin professional; you see this kind of stuff all the time. Like we talked about, this is an older document from the twenties, and when we try to copy this information out of here into our land management system or spreadsheet, this is what we came back with. There are times when OCR can help, there are times when machine learning can help, and there are times when it doesn’t work, when it can be more hurtful than helpful. When you’ve got cursive and print and things like that, it gets a little bit difficult. Especially if you’ve got old information that might be applicable to the divesting party, like this right-of-way 53.5, or any other type of file naming and cross-referencing information they might have in here. These portions of the document weren’t even OCRed at all, and it’s tough for that OCR to pick that kind of stuff up.
Once we’ve got all the agreement information out of the documents, and we’ve populated our database using the information that we could pull from OCR, with machines and humans keeping everything in line, we always do another round of a human checking everyone’s work. Similar to the old “pass your paper to the person behind you” from middle school. A second set of eyes on that data is always important, because, as we were talking about, humans have different influences: how I determine the way a provision is read, or the way an expiration from years ago is populated. It takes another set of human eyes to make sure you have confidence in that data.
So, after the human review is done, we start another level of automated QA/QC on the abstracted data, and we call that data conditioning. The data conditioning that we follow is a number of SQL database queries that look for anomalies in the data that don’t jibe with the data model or meet the data entry standard rules. In the same way that we were doing it for our document separation, we’re doing the same thing for our agreement abstraction information. For example, we’re making sure here that if we have a group code of a certain deed, the agreement type is deed. In the same way, if it was an easement, it’s got to be some type of easement document. We validate all kinds of other stuff. Perhaps you’ve got a blanket flag, where you’ve got blanket easements across the entirety of this project; if you combine that with not having a tracked width for the portions crossing that parcel, to us that would be a probable error. It doesn’t mean that it’s truly an error, but it’s something you want a human to check over again to make sure that we pulled out the appropriate information. If the blanket flag truly says yes, it’s a blanket, and we also have a width, that could be an error; there’s no reason we would have a defined width of a corridor if the entirety of the parcel is a blanket easement. Or if we’ve got stuff where the agreement date is the current date or later? I doubt it. For stuff that we’re getting from an already-existing asset, there’s no way the agreement effective date would be in the future, especially on older documentation.
We’re making sure that divisions within the company match up with the geographical locations of the agreement, that kind of stuff. Stuff that you can look at at a high level across the entirety of the database to identify things that could be issues. Almost every single piece of the data lends itself to some form of query validation. I often call them tattle-tale queries because you’re just looking for if-then type statements. And it’s really easy for a human to make a mistake: for an agreement type of wind lease, the rights held are probably not water. They just went in there to pick wind as the right held and they tabbed out of the cell too soon, or they picked the first W that came up in the list. This can be performed on spreadsheets as well; there are plenty of SQL plugins for spreadsheets that lend themselves to these types of validations.
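The tattle-tale idea is just if-then rules run over every record. Pandell's actual queries are SQL; the sketch below expresses three of the checks Matt mentions in Python, with field names made up for illustration. Note the checks only flag probable errors for a human to re-review; they never auto-correct anything.

```python
import datetime

def tattletale(agreement, today=None):
    """Run illustrative if-then checks over one agreement record
    and return a list of probable-error flags (possibly empty)."""
    today = today or datetime.date.today()
    flags = []
    # A blanket easement should not also carry a defined corridor width.
    if agreement.get("blanket") and agreement.get("width"):
        flags.append("blanket easement should not carry a defined width")
    # Agreements on an already-existing asset cannot be dated in the future.
    if agreement.get("effective_date") and agreement["effective_date"] >= today:
        flags.append("effective date is today or in the future")
    # A wind lease with "water" as the rights held is almost certainly a typo.
    if agreement.get("type") == "wind lease" and agreement.get("rights") == "water":
        flags.append("rights held 'water' unlikely on a wind lease")
    return flags
```

Run nightly over the whole database, the non-empty flag lists become the review queue for the next morning.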
So now that we feel we’ve got a solid set of agreements, we can move on to GIS mapping. Mapping provides other levels of detail about an agreement that would be extremely difficult to infer without the spatial reference of the map. It allows us to see overlapping agreements, gaps in our acquired line, possible title issues with the fee property, and the list goes on and on.
In this example, we used manual mapping due to the high accuracy and detail provided by it. There are automapping programs and polygon creation software out there that use formatted legal descriptions to create polygons out of the agreement records. While that can be a quick and easy way to have at least some spatial data for the agreements, I wanted to show you where having a solid GIS team makes a difference.
There is an old industry joke that GIS stands for "Get It Surveyed," but a quality first step in accurate mapping can expedite the process to cure any deficiencies in the data that you acquire.
Speaking of accuracy, let’s take a look at some of the things that mapping was able to identify. First, mapping was able to create a facility line for the asset we were acquiring, based off of the as-built drawings provided by the divesting party’s engineering department and the stationing information applied to them. And we overlaid that with the tax parcel layer that we acquired from the county. You can see these lighter grey lines that are laid in here, and we use that stationing information to match it up with the county’s.
And then we mapped the legal descriptions of the agreements against the tax parcel layer. So, out of the 54 agreements that we identified in the separation and abstraction processes, mapping showed that only twenty-two of them actually fell on the line we were acquiring, and a bunch of the other stuff we got was all structure. So again, that goes back to making sure a human at the divesting company did it right the first time, that they actually gave us what we were looking to acquire and not stuff that is all structure.
Mapping was also able to identify an easement right here, this yellow portion, that overlaps a deed. So, we already own the parcel in fee simple title. We’re probably missing an easement release, or maybe we still need to release it, but again that goes back to making sure we got what we needed from the divesting party.
Mapping was also able to identify a curative issue where the last chain-of-title deed that we had for this parcel was a deed out from 1974, even though the tax parcel layer from the county showed that we, or rather the divesting party, currently owned it today. So, there’s probably an issue there where we need to cure that chain of title. And mapping was also able to identify a few gaps in the line, where it looks like we don’t have any agreements that actually apply to the asset we were supposed to be acquiring.
After human QC of every agreement polygon is done, we also run a couple of automated QC processes to ensure that the polygons are in the right location as far as the formatted legal description against the polygon, that they don’t contain any geometry issues such as bowties or too many vertices, and that the mapped acres of the polygon are within a strict variance of the called acres on the agreement. If the called acres on the agreement were 20 acres and we ended up mapping 200 acres, that’s probably not an accurate polygon and we need to go back and review it. Usually that variance is either five or ten percent of the called area.
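That acreage-variance check is simple enough to sketch directly. The function name and the default ten-percent tolerance are illustrative; the transcript says teams typically use five or ten percent.

```python
def acreage_ok(called_acres, mapped_acres, tolerance=0.10):
    """Return True when the mapped acreage of a polygon is within
    the allowed variance (default 10%) of the called acreage stated
    on the agreement; False means the polygon needs human review."""
    if called_acres <= 0:
        return False  # nothing to compare against; send to review
    return abs(mapped_acres - called_acres) / called_acres <= tolerance
```

So a 20-acre agreement mapped at 21 acres passes, while the 200-acre polygon from Matt's example fails immediately and goes back to the GIS team.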
After we’re able to sort through all the different things that mapping is able to identify, we go back and forth with the divesting party to cure all of our defects; perhaps we have to run more title through the county. Then these agreements enter into maintenance mode.
This is where we are making the payments on the agreements, completing our obligations, and abiding by the provisions of the agreement. We can easily run an obligations-due report out of our land management system, or have a stored query against our spreadsheet, to make sure that time-sensitive stuff like renewals, extensions, and scheduled maintenance is actually happening on the acquired asset.
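An obligations-due report boils down to a date-window filter. The sketch below assumes obligations are records with a `due` date and a 90-day lookahead window; both are illustrative choices, not anything prescribed by a particular land system.

```python
import datetime

def obligations_due(obligations, within_days=90, today=None):
    """Return the time-sensitive items (renewals, extensions,
    scheduled maintenance) coming due inside the lookahead window,
    sorted soonest first. Past-due items are excluded here; a real
    report would likely surface those separately and loudly."""
    today = today or datetime.date.today()
    horizon = today + datetime.timedelta(days=within_days)
    due = [o for o in obligations if today <= o["due"] <= horizon]
    return sorted(due, key=lambda o: o["due"])
```

Whether this runs as a stored query against a spreadsheet or a report inside the land system, the logic is the same filter over due dates.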
We might also run a payment report, same thing via a stored query, to identify what’s coming due, what we actually paid in the past, and the payment history. We would be managing suspended payees, updates to ownership, escheated funds, and we could probably have an hour-long discussion just about this portion of the life cycle alone, so understand that I’m hitting it at a very high level.
We can also digitize some of the provisions that we need to abide by in the map as well, so we can see things like access restrictions. Maybe we’re only supposed to use a certain gate, or we’re only supposed to access the property at certain times of the day. We can also take a look at places where we’ve got multi-line rights, if we want to add extensions or build more lines within an existing corridor. Or places where we’ve got vegetation management restrictions, where we can or can’t use Roundup in the right-of-way, or “don’t cut down the line of cherry trees that abut the property,” whatever it might be. The list goes on and on for those provisions that don’t necessarily have dates associated with them, but they are things that we need to maintain to keep our end of the bargain so that we are not in breach of contract.
Perhaps after several years, we decide to sell this asset off. The divestiture process, after we’ve already done all of our due diligence steps from the beginning and kept up high-quality maintenance, should be fairly simple now, unlike it was the first time around. So, we can search by agreements located on the facility in our land management system, or use our GIS maps like you’re seeing here.
We can check to see if there are any restrictions on assignment, to see if there is anything here on our map where the supporting agreements say, hey, look, there are some restrictions on assignment here that we need to analyze.
This list can easily be exported to Excel from the GIS system, or if you are already maintaining a spreadsheet, you can hand those spreadsheets over for the legal team to run through their portions of the process. Maybe you’re maintaining sheets by facility, or hopefully they are.
Once we have an assignment complete and the sale is done, and we’ve got the list of agreements that it applies to, we can mass-add that assignment to all of the agreements in our land management system, or at least place it in every folder for the agreements it applies to in our File Explorer. And we can mass-inactivate those agreements as well in our land management system, so that we can keep them archived for future reference, but they won’t display in our active maps or searches to clutter our data.
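That mass-add and mass-inactivate step can be sketched as a single pass over the agreement records. The record shape and function name here are invented for illustration; a land management system would do this inside its own database, but the logic is the same.

```python
def mass_assign_and_inactivate(agreements, assignment_id, affected_ids):
    """Attach the completed assignment document to every affected
    agreement and mark those agreements inactive, so they archive
    out of active maps and searches but stay on file for reference.
    Updates the records in place and returns the count updated."""
    count = 0
    for agr in agreements:
        if agr["id"] in affected_ids:
            agr.setdefault("documents", []).append(assignment_id)
            agr["active"] = False
            count += 1
    return count
```

Returning the count gives a quick sanity check that the number of agreements touched matches the list on the bill of sale.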
Then we’d be looking to move on to our next acquisition.
That’s the entirety of my presentation today, guys. I hope there are some good questions for the Q&A session after this, and thanks for attending and playing along with me.
ELIZA WITH PANDELL Thanks Matt. That was some great information. We’ll jump into some questions. So, what type of efficiencies, or lack thereof, have you seen from implementing so many QA/QC levels? Because it seems like a bit of a time-consuming task.
MATT WITH PANDELL Yeah, absolutely. A lot of the QC controls that we’ve put in place have actually allowed us to speed up quite a bit. I’ve been managing the services department here going on seven years, and the efficiencies that we’ve gained out of the processes I’ve outlined today have been astronomical. From the time that I started to now, we are probably about 700 percent faster and more efficient than we were. Doing the old person-by-person, paper-by-paper siloed thing really didn’t work. Unifying the team under a set of standard practices that we could measure across the board really allowed everybody to bounce questions off each other, get faster, and implement all of the ideas that we bred throughout that process, to make sure that everybody was as efficient and fast as possible. I really feel like we’ve still got room to improve, and we’re always trying to improve, but we are a whole lot better now than we were then. So, the time that we’ve spent developing these QA and QC practices has had a good effect in the long run, even though it took time upfront. Yes, the management team has to be involved to make sure this kind of stuff gets implemented, and the training has to be there so these guys know what they’re doing, but the end result is great efficiency gains.
ELIZA WITH PANDELL Great. We’ve got another question for you. Regarding the reporting features of the database, can you give us more details on which additional reports can be pulled: assignment, obligations, removal, et cetera? And also, can you create ad hoc reports?
MATT WITH PANDELL Yes, absolutely. So, you can definitely create ad hoc reports for anything that you want to pull out of there. The nice part about our land management system, and I’m only going to speak for ours, is that the ability to search for what you are looking for and see that search result in your land management system really can get rid of a bunch of reports right off the bat. Every field in our land management system lends itself to querying. And if you’re looking for something like, show me all of my held-by-production leases within a certain county, that’s a ten-second search that immediately provides you a list of agreements and tract numbers that you can give to management. Yes, there is more in-depth ad hoc reporting that you can do, but the power of our searchability really lets you run a lot of those older reports that would take a lot more time efficiently. You can get done in a matter of seconds what used to take days.
ELIZA WITH PANDELL This next question is kind of interesting, and obviously you’ll only be able to speak to it from your perspective and your area of the company, but one person is asking what Pandell’s 2021 goals to improve and grow are, and what our focus is.
MATT WITH PANDELL From our perspective, what we’re trying to do here is constantly push those efficiencies that we were talking about. We have a highly competitive group here at Pandell. These guys don’t get enough credit from me, or anybody else, for how hard they work and the hours that they put in. So, now we’re really trying to just make sure that we’re doing things as smart as possible. We’re a technology company, and we want to make sure we’re using all the technology at our fingertips so that our services goals are met in a timely manner. So, it’s the old work smarter, not harder idea. We know our guys are working hard, and we want to make sure that we can put in those efficiency gains at a high level using technology rather than trying to isolate people who are doing better or worse. And so for our group it’s mostly more of the same, but better. That’s really what we’re looking to do.
ELIZA WITH PANDELL Always improve, right? That’s absolutely one of our mottos here at Pandell, and it applies across the board. Okay, another quick question for you about mass updating capabilities. Does your database have a mass update feature for most fields?
MATT WITH PANDELL Absolutely. You can mass update documents, owners, facilities; there are quite a few options for mass updating. And it really speeds things up, just like we were talking about with being able to create all those agreements at one time. Applying an assignment to 55 agreements would be almost a full day’s task for some lease analysts, going through there and attaching the exact same document to 55 different agreements. But you can do that in about 30 seconds using our mass change options. So, there’s a lot of power in mass change, and people can make a huge mistake with one fell swoop. So, there are definitely some limitations on who you want doing that within your enterprise, but yes, we do have several mass change tools that allow you to do it better.
ELIZA WITH PANDELL Excellent, okay and the last question. How many personnel do you have internally following these practices that you’ve discussed and is it scalable?
MATT WITH PANDELL Yeah, absolutely. So, right now we’ve got 21 staff here on the internal services team. We’ve had up to 57, so we’ve definitely had a large group here before, and it definitely is scalable. I mean, everything we were looking at was looking group-wide, and it didn’t matter if there were 20 people, or 50, or 100. All of those quality control items that we use are machine learning, looking system-wide. They don’t care if it’s Eliza doing it, or Matt doing it, or anybody on this call doing it; the system only knows how to do things one way, and anything outside of that is stuff it pushes back to you. Yes, it’s definitely scalable. You could be using this on 100,000 agreements with 100 people working on it at one time.
ELIZA WITH PANDELL Fantastic, okay. We’re so glad that you could speak to us today, Matt. That was really informative, and we’re so glad that everyone could join us. We hope that everyone who’s here today, and more, can join us for our next event in the coming weeks.
MATT WITH PANDELL Thanks guys.
ELIZA WITH PANDELL Bye.