
Update Dynamics GP Notes from an eConnect Integration

By Steve Endow

I would have thought that one of the many eConnect integrations I have developed over the years would have required me to update an existing Dynamics GP Note record.  Perhaps I have done it before, but when this seemingly simple requirement came up recently, I couldn't remember having done it before, and Google didn't seem to turn up any results.

If there is an easier way to do this, please post a comment below, as I'm interested in knowing if I missed something obvious.

Let's say that  your .NET eConnect integration needs to update the address and phone number of an existing customer.  No problem.  But when you update the customer contact information, you also need to append an update message to the customer's Note field.  Okay, makes sense.

But if you try and set the eConnect taUpdateCreateCustomerRcd NOTETEXT field, that new value will wipe out any existing customer Note.  After poking around the documentation and searching for a way to update an existing GP note programmatically, I came up empty.  I searched through the standard zDP stored procedures, as well as the eConnect "ta" stored procedures, but didn't find anything there for updating Notes either.  While the SY03900 note table is pretty simple, it would be nice to have a simpler option for updating notes.

You could wrap all of this up into a single SQL stored procedure.  I didn't do that because I didn't want to deploy yet another custom stored procedure for this particular client.

You could also package this up as an eConnect Post procedure, but I find those to be a bit of a hassle: it's still a custom stored proc, which I didn't want, and eConnect Post stored procedures are wiped out by most GP service packs and upgrades, so it is very easy to forget to recreate them after an update.

Anywho...

First, you need to make sure that the customer has a record in the SY03900 table.  Even though new customer records are automatically assigned a NOTEINDX value, by default, a record is not automatically created in SY03900.  So if you update customer ACME001 and attempt to update its note record in SY03900, your update will fail if the record doesn't yet exist.

I quickly pulled together this SQL to check for a Note record and create one if one didn't already exist.

IF NOT EXISTS (SELECT TOP 1 NOTEINDX FROM SY03900 WHERE NOTEINDX = (SELECT NOTEINDX FROM RM00101 WHERE CUSTNMBR = 'ACME001'))
BEGIN
    INSERT INTO SY03900 (NOTEINDX, DATE1, TIME1, TXTFIELD)
    VALUES ((SELECT NOTEINDX FROM RM00101 WHERE CUSTNMBR = 'ACME001'), CONVERT(VARCHAR(10), GETDATE(), 101), CONVERT(VARCHAR(12), GETDATE(), 108), '')
END



I then borrowed and repurposed some SQL from the very smart Tim Wappat, who has posted a nice script for fixing line breaks in GP note records.

UPDATE SY03900 SET TXTFIELD = CAST(TXTFIELD AS varchar(MAX)) + CHAR(13) + 'Text to append'
WHERE NOTEINDX = (SELECT NOTEINDX FROM RM00101 WHERE CUSTNMBR = 'ACME001')



This update script adds a new line to the note and appends the new text to the bottom of the note.

With these scripts ready, after you use eConnect to update the existing customer contact info in GP, you run a separate step to perform the Note update.
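
That separate step can live in a small helper in the same .NET integration. Here is a minimal sketch, assuming you already have an open SqlConnection to the GP company database; the AppendCustomerNote name and the parameter sizes are mine, not anything provided by eConnect.

using System;
using System.Data;
using System.Data.SqlClient;

public static class CustomerNoteHelper
{
    // Appends a line of text to an existing customer Note, creating the
    // SY03900 record first if the customer does not have one yet.
    public static void AppendCustomerNote(SqlConnection conn, string customerNumber, string textToAppend)
    {
        string ensureNoteSql =
            @"IF NOT EXISTS (SELECT TOP 1 NOTEINDX FROM SY03900
                 WHERE NOTEINDX = (SELECT NOTEINDX FROM RM00101 WHERE CUSTNMBR = @CustomerNumber))
              BEGIN
                  INSERT INTO SY03900 (NOTEINDX, DATE1, TIME1, TXTFIELD)
                  VALUES ((SELECT NOTEINDX FROM RM00101 WHERE CUSTNMBR = @CustomerNumber),
                          CONVERT(VARCHAR(10), GETDATE(), 101), CONVERT(VARCHAR(12), GETDATE(), 108), '')
              END";

        string appendNoteSql =
            @"UPDATE SY03900 SET TXTFIELD = CAST(TXTFIELD AS varchar(MAX)) + CHAR(13) + @TextToAppend
              WHERE NOTEINDX = (SELECT NOTEINDX FROM RM00101 WHERE CUSTNMBR = @CustomerNumber)";

        // Make sure a note record exists, then append the new text on a new line
        using (SqlCommand cmd = new SqlCommand(ensureNoteSql, conn))
        {
            cmd.Parameters.Add("@CustomerNumber", SqlDbType.VarChar, 15).Value = customerNumber;
            cmd.ExecuteNonQuery();
        }

        using (SqlCommand cmd = new SqlCommand(appendNoteSql, conn))
        {
            cmd.Parameters.Add("@CustomerNumber", SqlDbType.VarChar, 15).Value = customerNumber;
            cmd.Parameters.Add("@TextToAppend", SqlDbType.VarChar, 8000).Value = textToAppend;
            cmd.ExecuteNonQuery();
        }
    }
}

You would call AppendCustomerNote with the same customer number right after the eConnect customer update succeeds.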

Obviously this sample is only for updating customer notes, but you could repurpose it for vendor and other notes, and if you got really fancy, you could probably abstract it to update notes for any record type.


Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter




Apparent bug in eConnect taSopSerial stored procedure for invoice with negative quantities

By Steve Endow

This is an obscure one, but I thought I would document it for the Google-verse.

I developed an eConnect SOP Invoice import for a GP 2010 customer that sells and services lots of serialized items.  When they visit a customer, they may find that an item needs service, but the customer needs a loaner item in the meantime.

To record this transaction, they will create an invoice with the loaner item with quantity 1, and then have a second line for the serialized item being brought back to the shop for repair with a quantity of -1.  It is a little unusual, but if you have an invoice with a serialized item with quantity -1, it allows you to bring that serialized item into inventory.  

This process works well for invoices entered directly into Dynamics GP.  But if you try and import an invoice with a serialized item with a negative quantity, you'll get this error.


"taSopSerial Error - Serial Number does not exist in Item Serial Number Master -IV00200"

Notice that the Quantity node is -1.  The customer is trying to receive the serial number into inventory, but eConnect isn't allowing it.

If you review the SQL script for the eConnect taSopSerial stored procedure, you'll see this tidbit somewhere around line 215-230, depending on the formatting.

      if ( @I_vSOPTYPE = 3 
         and @I_vQUANTITY = -1 ) 
      begin 
          select @I_vQUANTITY = 1 
      end 

For some reason, the taSopSerial procedure is converting quantities of -1 to 1 for invoices.  The eConnect procedures don't include any comments, so it isn't clear why this is occurring in the 1800+ line procedure.

A few lines later in the procedure, there is validation for quantities of -1 and 1, which would seem to indicate that the lines above shouldn't be necessary--this validation explicitly acknowledges that an invoice item can have a quantity of -1.

        if ( @I_vQUANTITY not in ( -1, 1 ) ) 
        or ( @I_vQUANTITY = -1 
             and @I_vSOPTYPE <> 3 ) 

And only changing the value of lines with quantity of -1 seems pretty odd (why not all negative quantities?), and it just happens to affect my customer's import of invoices with serialized items that have a -1 quantity.

Since the GP client allows the entry of -1 for serialized items on an invoice, it isn't clear whether this is a bug or whether there is a specific reason why eConnect disallows a -1 line item quantity.

To work around this, we commented out those 5 lines, and that resolved the issue.  The invoices with -1 quantities for the serialized items import just fine, and after using the modified taSopSerial procedure for over a year, the client hasn't had any issues.

Except when they update or upgrade GP.  They recently applied a service pack for the 2014 year end updates, and that update apparently dropped and recreated the taSopSerial procedure.  So whenever they do an update, we have to remember why the error is occurring again, and then modify the new taSopSerial procedure.
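
One way to take the guesswork out of that step is a quick post-upgrade check of the procedure's modify date in sys.objects: if the procedure has been recreated since you last applied the change, you know the fix needs to be reapplied. A rough sketch, with an illustrative connection string and date:

using System;
using System.Data.SqlClient;

class TaSopSerialCheck
{
    static void Main()
    {
        // Illustrative values -- point the connection at the GP company database and
        // set lastModifiedByUs to the date you last reapplied the customized procedure.
        string connString = "Data Source=GPSQL;Initial Catalog=TWO;Integrated Security=True";
        DateTime lastModifiedByUs = new DateTime(2015, 1, 15);

        using (SqlConnection conn = new SqlConnection(connString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT modify_date FROM sys.objects WHERE object_id = OBJECT_ID('dbo.taSopSerial')", conn))
        {
            conn.Open();
            object result = cmd.ExecuteScalar();

            if (result == null)
            {
                Console.WriteLine("taSopSerial was not found in this database.");
            }
            else if ((DateTime)result > lastModifiedByUs)
            {
                // A service pack or upgrade has recreated the procedure since our change
                Console.WriteLine("taSopSerial has been recreated ({0:d}) -- reapply the -1 quantity modification.", (DateTime)result);
            }
            else
            {
                Console.WriteLine("taSopSerial has not changed since the modification was applied.");
            }
        }
    }
}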

I verified that this "problem" code also exists in GP 2013.  I have not yet checked GP 2015.


Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter



Limitation with Dynamics GP VS Tools Event Handlers?

By Steve Endow

Visual Studio Tools Event Handlers are a great feature that allows you to respond to events and actions in Dynamics GP.  Let's say that you want to validate data after a user selects a value from a GP drop down list, or you want to run some code when the user selects a customer ID in a window.

Or, as in my current project, let's say that you want to suppress or click past some Dynamics GP dialog boxes!

The Void Historical Payables Transaction window was modified in Dynamics GP 2013 to add a Vendor ID filter.  This is a huge improvement for customers that actually need to use this window--you can now filter on Vendor ID and Document Number to quickly find the document to be voided.


But another change made in the Void Historical Payables Window in GP 2013 is that if you use the "Mark All" button, two very annoying dialog boxes appear.



I understand that these dialogs have a purpose, but for my current project of automating a PM void, they are getting in the way.

So, using a handy-dandy VS Tools event handler, I'm able to suppress the dialog boxes.

Here I am registering the event handler:

pmVoidPaymentsWindow.BeforeModalDialog += new EventHandler<BeforeModalDialogEventArgs>(PmVoidPayments_BeforeModalDialog);


And here is the code that will fire when the event occurs:

void PmVoidPayments_BeforeModalDialog(object sender, Microsoft.Dexterity.Bridge.BeforeModalDialogEventArgs e)
{
    if (e.Message.StartsWith("Your selection may include payments that have been reconciled"))
    {
        e.Response = Microsoft.Dexterity.Bridge.DialogResponse.Button2;
    }
    else if (e.Message.StartsWith("Your selection may include credit card payments with related invoices"))
    {
        e.Response = Microsoft.Dexterity.Bridge.DialogResponse.Button2;
    }
}


This Event Handler code works great.  If I click on the Mark All button on the GP window, I don't even see the dialog boxes--they are suppressed by the VS Tools code.  And if I have some code in the GpAddIn.cs class that clicks the Mark All button programmatically, it also works.

But.......if I have code outside of the GpAddIn.cs file that clicks the Mark All button, the event handler doesn't fire.


In the screen shot above, I have a separate VS Tools form (FrmVoidTest) with a Mark All button.

The button simply calls pmVoidPaymentsWindow.MarkAll.RunValidate(), just like my test code in GpAddIn.cs.  But for some reason, when that method is called from the .NET form, the BeforeModalDialog event handler does not fire, and the two dialog boxes are displayed.

So based on my testing, there appears to be some limitation of VS Tools that requires that any code interacting with the GP form be called from within the GpAddIn.cs file in order to trigger event handlers.  Even if my test form calls the exact same method in the GpAddIn.cs file that clicks the Mark All button, the event handler is not triggered.
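
To make the scenario concrete, the structure looks roughly like this; the ClickMarkAll method name and the way the form gets a reference back to the add-in are my own placeholders:

// In GpAddIn.cs -- calling this from GP-side code suppresses the dialogs as expected
public void ClickMarkAll()
{
    pmVoidPaymentsWindow.MarkAll.RunValidate();
}

// In FrmVoidTest.cs -- a separate VS Tools form with its own Mark All button
private void btnMarkAll_Click(object sender, EventArgs e)
{
    // This runs the exact same GpAddIn.cs method, but in my testing the
    // BeforeModalDialog handler does not fire and both dialogs appear.
    gpAddIn.ClickMarkAll();
}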

This seems really odd, but I've tried 3 or 4 different approaches and the event handler code just won't fire if I click the Mark All button outside of GpAddIn.cs.  I'm still researching it and asking a colleague to review my code to see if I'm missing something.

If anyone has any ideas or clever suggestions, I'm all ears!


Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter

Server field is blank when you launch Dynamics GP due to space at beginning of DSN

By Steve Endow

I just had a call with a client whose Dynamics GP Server field was always blank when they launched GP.



The drop down list had entries, and the user could log into GP, but every time they launched GP, they had to re-select the Server value.


I checked the Dex.ini file and saw that it did have a SQLLastDataSource value.

SQLLastDataSource= Dynamics GP 2013

I noticed that there was a space between the equal sign and the name, so I removed the space, saved the Dex.ini, and relaunched GP, but that didn't resolve the issue.

I then found this KB article, but it discusses a completely blank Server drop down list--not a Server field that remains blank and doesn't display a default value.

But based on that article, I decided to double check the ODBC DSN settings.  We confirmed that the GP DSN was present in the 32-bit ODBC settings window and was using the proper SQL driver, so I was stumped.

On a whim, I decided to confirm the DSN settings, and that's when I saw the likely culprit.


Did you catch that?  See the issue?

Look again:


Notice that there is a sliver of blue before the word Dynamics?

There was a space in front of the DSN name.

The client had manually created their Dynamics GP DSNs, and in the process had accidentally typed a space at the beginning of the name.

We removed the space from the beginning of the DSN name, relaunched GP, and the Server value defaulted just fine.

Apparently Dynamics GP can't handle a DSN that starts with a space.  The DSN will work, but it will never default in the Server field when you launch Dynamics GP.
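
If you want to check for this without squinting at the ODBC dialog, the DSN names can be read straight from the registry. A quick sketch, assuming 32-bit system DSNs on 64-bit Windows (64-bit DSNs live under SOFTWARE\ODBC\ODBC.INI instead):

using System;
using Microsoft.Win32;

class DsnWhitespaceCheck
{
    static void Main()
    {
        // 32-bit system DSNs on 64-bit Windows are registered under the WOW6432Node branch
        string keyPath = @"SOFTWARE\WOW6432Node\ODBC\ODBC.INI\ODBC Data Sources";

        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(keyPath))
        {
            if (key == null)
            {
                Console.WriteLine("No 32-bit system DSNs found.");
                return;
            }

            foreach (string dsnName in key.GetValueNames())
            {
                if (dsnName != dsnName.Trim())
                {
                    Console.WriteLine("DSN name has leading or trailing whitespace: '" + dsnName + "'");
                }
            }
        }
    }
}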

Whenever I think I've seen all of the GP oddities, a new one pops up right on cue.

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter



Installing missing Professional Services Tools Library stored procedures

By Steve Endow

One of my clients was trying to run the Professional Services Tools Library (PSTL) Vendor Modifier tool on GP 2010.  When they tested it in their test database, they received this message:


"Could not find stored procedure smVendorChange1"

We checked the database, and the stored procedure did not exist.  As a test, I created the SQL script for that procedure from my server and we ran it on their database--we received messages indicating that there were other dependencies, so it seems that none of the PSTL procedures were present in this test database.

After doing some reading, I found that PSTL should automatically create its stored procedures.  But it clearly hadn't done so with this database.

We then tried using the SQL Maintenance window to recreate the procedures.


Unfortunately, that didn't appear to do anything.  None of the missing procedures were created.

We then logged into a different company database and opened the PSTL window.  PSTL immediately launched its status window and created the stored procedures.  Hmmm.

Puzzled, we logged back into our test database and launched the PSTL window.  It didn't create the procedures automatically.  Out of curiosity, we clicked on the Register button.  The default registration code was displayed, so that didn't seem to be the issue.


But when we clicked on the OK button for the Registration code window, a status window displayed and PSTL started to install its stored procedures!


I don't know why it didn't automatically install them when it launched, but if you run into a situation where the stored procedures are missing or need to be reinstalled, try clicking on Register, then OK, and it should trigger the reinstall process.

With GP 2013 (or 2013 R2?), the PSTL tools are included and do not require a separate install, so I'm assuming this issue is only relevant for GP 2010.


Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter




Bug in Dynamics GP VS Tools Event Handlers for DexUIForm Windows

  By Steve Endow

A few days ago I wrote about an odd behavior in VS Tools where an event handler wouldn't fire if a VS Tools window initiated an event on a GP window.

Specifically, on the GP 2013 Void Historical Payables Transactions window, when you click on the Mark All button, two different dialog boxes appear. (the dialogs are not present on GP 2010)


I needed to suppress those two dialogs.  I wrote an AddIn with an event handler for the BeforeModalDialog event, which worked perfectly when I clicked the Mark All button in GP.

pmVoidPaymentsWindow.BeforeModalDialog += new EventHandler<BeforeModalDialogEventArgs>(PmVoidPayments_BeforeModalDialog);

But if I had a VS Tools window "click" that button, the event handler did not fire.  At all.

I was stumped.  After hours of testing and trying various scenarios, I figured it was a bug, and I would have to develop a workaround.  I tried 2 or 3 different workarounds--all of which worked fine in GP, but none of which worked when a VS Tools window clicked Mark All.

For instance, I added VBA script to the window to see if VBA could take care of the dialogs.

Private Sub Window_BeforeModalDialog(ByVal DlgType As DialogType, PromptString As String, Control1String As String, Control2String As String, Control3String As String, Answer As DialogCtrl)

    If Left(PromptString, 61) = "Your selection may include payments that have been reconciled" Then
        Answer = dcButton2
    ElseIf Left(PromptString, 69) = "Your selection may include credit card payments with related invoices" Then
        Answer = dcButton2
    End If
    
End Sub

Once again, this worked fine if I manually clicked the button in GP, but when my VS Tools window clicked the button, the event handler would not work and the dialogs would appear.

I briefly considered trying Continuum, but I couldn't easily figure out what script to use to click through the dialogs.  I momentarily thought of using a macro, but didn't want to go there.

While desperately searching for a solution, I begged for help from Andrew Dean and Tim Wappat, both of whom are expert .NET and GP developers.  They both confirmed the issue and were puzzled.  They came up with a few workarounds to try, but nothing seemed to work.

I then finally resorted to using the Windows API to detect the dialog boxes and click through them, completely outside of the GP dev tools and VS Tools.  After a day of research and testing, I finally got that to work.  It was a kludge, but it did work, literally clicking the Continue button of the dialog boxes as they flashed up on screen.
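
For what it's worth, the shape of that Windows API kludge is roughly the sketch below. The dialog caption, button caption, and "Button" window class are placeholders; GP's Dexterity dialogs need to be inspected with a tool like Spy++ to get the real values, and they don't always behave like standard Win32 dialogs.

using System;
using System.Runtime.InteropServices;

static class DialogClicker
{
    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern IntPtr FindWindowEx(IntPtr hwndParent, IntPtr hwndChildAfter, string lpszClass, string lpszWindow);

    [DllImport("user32.dll")]
    static extern IntPtr SendMessage(IntPtr hWnd, uint Msg, IntPtr wParam, IntPtr lParam);

    const uint BM_CLICK = 0x00F5;

    // Looks for a dialog by caption and clicks a button on it by its text.
    // Caption and button text are placeholders -- inspect the real GP dialog first.
    public static bool ClickDialogButton(string dialogCaption, string buttonText)
    {
        IntPtr dialog = FindWindow(null, dialogCaption);
        if (dialog == IntPtr.Zero) return false;

        IntPtr button = FindWindowEx(dialog, IntPtr.Zero, "Button", buttonText);
        if (button == IntPtr.Zero) return false;

        SendMessage(button, BM_CLICK, IntPtr.Zero, IntPtr.Zero);
        return true;
    }
}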

And then today, Andrew created a VS Tools test app that was able to get the BeforeModalDialog event to fire properly.  Before I had a chance to review it, Tim Wappat had reviewed the code and found the reason:  Andrew had used a standard Windows Form rather than the VS Tools DexUIForm for his test window.

Sure enough, when I modified my test window to inherit from the Form class rather than the DexUIForm class, the BeforeModalDialog event handler worked properly when my window clicked the Mark All button.

public partial class FrmVoidTest : Form  //DexUIForm

So there it was--an apparent bug in the DexUIForm class caused the event handler for the Void Historical Payables window to not fire.  And strangely, it only seems to occur with some windows--for instance, the problem does not occur with the SOP Transaction Entry window.  So there is something about the Void Historical Payables Transactions window that triggers the bug.

Unbelievable.  It took me 3 full days to try several different workarounds, and it turns out that changing one word on one line of code was the solution.  It's problems like this that make me frustrated with Dynamics GP.  Incredible waste of time for such a seemingly small problem, yet that problem is critical, and prevents me from completing my project.

The only good thing is that I learned quite a bit about the plumbing of VS Tools and Dynamics GP windows, and I am now also familiar with the Win API code to detect, analyze and automate actions on Windows applications.  But it was a painful journey.

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter







Automatically close Dynamics GP report destination windows in VS Tools

By Steve Endow
 
  I am working on a project that involves automating a process in Dynamics GP.  Since there is no API for the particular steps I need to automate, I'm having to use the low-tech method of opening a GP window, populating some field values, and clicking a few buttons.

This is fairly easy to do with Visual Studio Tools for Dynamics GP...in concept.

But, unsurprisingly, Dynamics GP seems to always have to have the last word, and such seemingly simple projects are rarely easy.

In my particular case, after I performed the last button click on the GP window, Report Destination windows would appear to print posting reports.


The process I am automating generates not one, or two, or four, but FIVE report destination windows.  Yes, it is possible to turn off the reports so that the window doesn't appear, but that change would affect many other processes throughout GP where the client might want to actually print some of the posting reports.  It is possible to temporarily update the posting report settings in SQL to disable them, and then quickly re-enable them, but that method runs the risk of disabling the reports at the same moment that another user needs them.

Unfortunately, the Report Destination dialog boxes are not standard modal dialogs that would be detected by the BeforeModalDialog event handler in VS Tools.  So, some "off roading" is required.

When I ran into this problem I was lucky enough to find this excellent post by Paul Maxan on the Code Project web site.

http://www.codeproject.com/Articles/801319/Closing-Microsoft-Dynamics-GP-Report-Destination-w

Paul wrote a great article and did a very nice job crafting code that uses the Windows API to detect the Report Destination dialog box and use windows handles and messages to click through the dialogs.

He posted some very nice code that I packaged up into a separate class that can be easily invoked just about anywhere in your VS Tools project.

During my testing on GP 2013, I found that his combination of TAB + ESC didn't work for some reason.  After some fiddling, I found that ESC by itself did work, so I have used that in my GP 2013 project.

The Report Destination dialog boxes still flash on screen as they are dispatched by Paul's "Closer" code, but it seems to work well and definitely gets the job done.
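
To give a sense of the approach without reproducing Paul's code (get his version from the article above), a watcher along these lines polls for the Report Destination window and posts it an Escape key press. The window caption, polling interval, and the assumption that the window accepts a posted ESC are all mine, so treat it strictly as a sketch.

using System;
using System.Runtime.InteropServices;
using System.Threading;
using System.Threading.Tasks;

static class ReportDestinationCloser
{
    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

    [DllImport("user32.dll")]
    static extern bool PostMessage(IntPtr hWnd, uint Msg, IntPtr wParam, IntPtr lParam);

    const uint WM_KEYDOWN = 0x0100;
    const uint WM_KEYUP = 0x0101;
    const int VK_ESCAPE = 0x1B;

    // Starts a background watcher that dismisses any "Report Destination" window
    // it sees until the CancellationToken is signaled.
    public static Task StartAsync(CancellationToken token)
    {
        return Task.Run(() =>
        {
            while (!token.IsCancellationRequested)
            {
                // Caption is an assumption -- verify the exact window title first
                IntPtr hWnd = FindWindow(null, "Report Destination");
                if (hWnd != IntPtr.Zero)
                {
                    PostMessage(hWnd, WM_KEYDOWN, (IntPtr)VK_ESCAPE, IntPtr.Zero);
                    PostMessage(hWnd, WM_KEYUP, (IntPtr)VK_ESCAPE, IntPtr.Zero);
                }
                Thread.Sleep(100);
            }
        }, token);
    }
}

You would start the watcher just before triggering the posting, then cancel it once the process completes.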

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter




The value of logging and diagnostics in Dynamics GP integrations

By Steve Endow
 
I developed a custom web service for a customer that allows one of their internal web sites to create customers in GP and store customer credit card data in a third party credit card module for GP.

During the testing process, we had to troubleshoot various issues with code, APIs, data, etc.  As part of that process I used my standard logging library, which writes messages and errors to a simple daily text log file on the server.  I used the log to troubleshoot issues, then I resolved them, and eventually the project went live and I pretty much forgot about the detailed logging that I had set up.

Today, the customer did some additional testing on their Test web server, and he said that he encountered a problem.  He sent a screen shot from his web site, but it didn't provide any information or error text.

I then reviewed the GP web service log file and was surprised to see all of the detailed logging, which is automatically enabled in the test environment.

2/5/2015 10:56:18 AM: UpdateCC Instantiating UpdateCCResponse
2/5/2015 10:56:18 AM: UpdateCC Calling ValidateUpdateCCRequestHMAC
2/5/2015 10:56:18 AM: UpdateCC Calling ProcessUpdateCCRequest
2/5/2015 10:56:18 AM: UpdateCC Calling UpdateCustomer
2/5/2015 10:56:23 AM: UpdateCC Calling GetCustomerOpenBalance
2/5/2015 10:56:23 AM: UpdateCC UpdateCC Setting regRequest values
2/5/2015 10:56:23 AM: UpdateCC Getting company ID
2/5/2015 10:56:23 AM: UpdateCC Getting customer profile ID
2/5/2015 10:56:24 AM: UpdateCC Getting CC expiration
2/5/2015 10:56:24 AM: UpdateCC Getting customer payment profile ID
2/5/2015 10:56:24 AM: UpdateCC Getting customer payment profile info
2/5/2015 10:56:24 AM: UpdateCC Updating Authnet payment profile
2/5/2015 10:56:29 AM: UpdateCC Calling ImportCC
2/5/2015 10:56:29 AM: UpdateCC Saving update CC history
2/5/2015 10:56:29 AM: UpdateCC Checking amount paid and approval code
2/5/2015 10:56:29 AM: UpdateCC completed ProcessUpdateCCRequest
2/5/2015 10:56:29 AM: UpdateCC Process UpdateCCRequest returned True
2/5/2015 10:56:29 AM: WARNING: UpdateCC elapsed time: 10.57


Reviewing the entries, there were no errors, and it looks like everything worked properly.  But on the last line, I saw the warning, indicating that the process took over 10 seconds.  That likely explained the problem the customer experienced.  I had added a timer to each method in the web service to write a warning to the log any time a request takes longer than 10 seconds to process.
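
The timing logic itself is nothing fancy. Here is a stripped-down sketch of the pattern; the log folder, threshold, and method names are placeholders for whatever logging library you already use:

using System;
using System.Diagnostics;
using System.IO;

public static class TimedLogger
{
    // Matches the 10-second warning threshold shown in the log above
    const double WarningThresholdSeconds = 10.0;

    // Runs an operation, logs its start and completion, and writes a WARNING
    // entry if the elapsed time exceeds the threshold.
    public static T Run<T>(string operationName, Func<T> operation)
    {
        Write(operationName + " starting");
        Stopwatch timer = Stopwatch.StartNew();
        try
        {
            return operation();
        }
        finally
        {
            timer.Stop();
            Write(operationName + " completed");
            if (timer.Elapsed.TotalSeconds > WarningThresholdSeconds)
            {
                Write("WARNING: " + operationName + " elapsed time: " + timer.Elapsed.TotalSeconds.ToString("0.00"));
            }
        }
    }

    static void Write(string message)
    {
        // Simple daily text log file -- folder is a placeholder
        string folder = @"C:\Logs";
        Directory.CreateDirectory(folder);
        string path = Path.Combine(folder, DateTime.Today.ToString("yyyy-MM-dd") + ".log");
        File.AppendAllText(path, DateTime.Now + ": " + message + Environment.NewLine);
    }
}

A call like TimedLogger.Run("UpdateCC", () => ProcessUpdateCCRequest(request)) then produces entries in the same spirit as the log shown above.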

If you have ever worked with eConnect, you likely know that the first time you call an eConnect method, it can take 5-10 seconds for it to respond.  Once the eConnect service is 'alive', it responds very quickly, but eventually it shuts down again, and there will be a delay when it processes the next request.

Since the log file told me that this was the first request of the day, I suspect that eConnect took several seconds to respond when I updated the customer information, and after 10 seconds, the customer's web site timed out, thinking that the GP request had failed.  Looking at the time stamp in the log, you can see that the eConnect request started at 10:56:18, and was completed 5 seconds later by 10:56:23.  We also see that the call to Authorize.net took another 5 seconds, pushing the process over the 10 second threshold.

The eConnect delay occurred in the test environment because there isn't much regular activity.  In the production environment, we have a process that checks the status of eConnect, SQL, Authorize.net, and the third party credit card module every minute.  Because that process talks with eConnect every minute, the eConnect service always stays 'alive' and we avoid the startup delay.

Just a quick example of the value of logging and some basic diagnostics in a Dynamics GP integration.

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter




How do you backup your Hyper-V virtual machines?

By Steve Endow

So, how do you backup your Hyper-V virtual machines?  You DO back them up, don't you?

Today I spent a rare day working on my "data center infrastructure".

Specifically, I was working on my Hyper-V backups.  For the last several years, I've used batch files and VBS scripts to save my VMs and then copy the VHDs to a file server.  While not fancy, this actually worked okay for me for several years.

But since I retired my Windows file server and replaced it with a Synology NAS, I stopped compressing my VHD files and started to implement offsite backups as well.  So my VHD files get copied to the Synology, then the Synology has a backup job to copy the VHDs to external drives that are rotated each week.  Since I was no longer compressing the VHD files, my 2TB external drives were filling up, so I had to finally come up with a better system.

While copying VHD files seems adequate for basic situations, there are some challenges.  The obvious downside is compression.  Even if you have plenty of storage, having terabytes of backups sitting around isn't ideal.  And in my case, my limitation was not my main storage, but my external drives that I rotate each week.  In addition to capacity, managing multiple backup versions on the external drives was a hassle.

I looked into using Windows Backup on each VM, but if you attempt to backup to a network share (a NAS, in my case), Windows Backup only supports a single full backup.  It will not do differential backups, and it will not retain multiple backups on a share.  So that option was quickly ruled out.

I then searched for Hyper-V backup software.  After perusing the search results and a few forum posts, I probably found a dozen different names mentioned.  There are a lot of options and I couldn't possibly evaluate them all, or even a half dozen.  But two names seemed to have a focus on Hyper-V, and seemed to have positive feedback.

The two I chose to evaluate are:

  • Veeam Backup & Replication (apparently pronounced like "veem")
  • Altaro Hyper-V Backup


I've done some initial testing of both products, and both vendors offer a free limited version, as well as a full trial version.  Both appear to work well, but they seem to be pretty different products in terms of their features and presumed target market.

Veeam seems to be focused on "higher end" features and enterprise requirements with a full set of configuration options for nearly everything, while Altaro has a very good set of features and a simpler design that can get you running backups in a few minutes with less configuration.

Veeam seems to install a ton of "stuff"--its installer is over 800 MB, with a 139 MB service pack.  The Veeam solution takes quite a while to install, requiring .NET 4 and SQL Server Express, among other things.  Altaro is a relatively svelte 144 MB, with a very fast, simple installation.  It seems obvious that the Veeam installation is larger and more involved because it has some pretty advanced and powerful features--a few of which I don't understand.  The question I have to answer is whether I want those features and whether the potential overhead is worthwhile.

I don't believe that my comparison of the two products will be focused on whether one product is "better" than the other.  It will be a question of whether they can completely fulfill my current requirements, fulfill any possible future requirements I can think of, how easy they are to use and manage, and how much they cost.

From the list prices I've seen, Veeam is a little bit more expensive than Altaro, but offers some additional features and more flexibility, such as with the configuration of multiple backups and destinations.  The difference is only a few hundred dollars, so price alone isn't a primary driver for me between the two.

My initial impressions are that, objectively, Altaro is probably the more appropriate solution for me.  While I may like a few of the more advanced features and configurability of Veeam, those items are probably not essential for my business.  Nice to have?  Sure.  Critical?  No.  Overhead that I probably don't need to be thinking about given my simple requirements?  Probably.

But would I like just a little more configurability in Altaro?  Yes.  And while it is probably not important technically, I find Veeam's backup files and folder structure to be fantastically simple, while Altaro produces a messy pile of directories and files that look baffling.

My guess is that this difference is due to how the two manage their backup logs and histories.  Veeam appears to use SQL Server Express, so it can maintain metadata in SQL and have a very clean backup file and folder structure, whereas my guess is that Altaro stores the backup metadata on the backup drive, so that's why you see dat files and GUIDs on the backup drive.  Once I do some restores I suspect I'll have a better feel for the merits of each approach.

I just started evaluating the two products today, so I'm going to compare them over the next two weeks to see which one might be a better fit.


Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter




Who deleted my Dynamics GP database table and stored procedure?

By Steve Endow

An unusual situation has come up at a client.  While reviewing some integration logs, I saw that a custom database table had been deleted from the production Dynamics GP company database.  The integration log recorded an error from a stored procedure--the proc was trying to insert a record into the table, but the table no longer existed.

An unexpected error occurred in SaveUpdateCCHistory: Invalid object name 'cstb_Update_CC_History'.

Since the procedure was still present, but the table wasn't, we were pretty confident the table had been deleted, since you can't create a proc that refers to a non-existent table.

Very strange.  We recreated the table, and moved on.

Then a few days later, while reviewing the integration logs in the Test company database, we saw an error indicating that a call to a custom stored procedure was failing.

An unexpected error occurred in InsertCustomerEmailOptions: Could not find stored procedure 'cssp_New_Customer_Email'

Sure enough, that procedure was missing from the Test database.

These objects have been in both databases for weeks, if not several months, so we hadn't touched them, and certainly didn't have any processes or scripts that would delete them.

The client said they couldn't think of any way the objects would be deleted.

These mysteries are always difficult to research "after the fact".  The ideal solution is to have third party SQL Server auditing software that records such activity and lets you review it later when a problem occurs.  But since such SQL administration is relatively rare with GP clients, we usually have limited tools to research such issues.

After some Googling, I found a few articles on how you can query a database transaction log to determine when a database object was dropped and who dropped it.

But there are two big caveats:

1.  The fn_dblog and fn_dump_dblog functions used are not documented or supported, have been shown to have bugs, and can result in some unexpected consequences.  So you should use them very cautiously.

2.  The fn_dblog and fn_dump_dblog functions read database transaction log activity.  So if the database has had one or more full backups since the drop, you are likely out of luck and will not find any information about the dropped objects.

Technically it is possible to read from a database backup file, but such files typically do not have much log data to work with, so the odds of finding the drop data in a backup file are slim.

Also, technically it is possible to use the functions to read directly from transaction logs, but I don't think I've ever seen a GP client that intentionally or systematically backs up their SQL transaction logs, so that is typically a long shot as well.  Usually, once a full DB backup is performed, the transaction logs get cleared.

But, aside from those rather significant limitations, I was still pretty impressed that it is possible to determine when an object was dropped, and who dropped it.  I'm guessing it will not be very helpful in a real environment where you may not know an object was dropped for a few days, but if you discover the problem quickly, you can try it and see if it works.


Below is the script that I used to test with the TWO database.  I create a test table and insert some rows.  I then backup the database and restore to a different database name.  I then drop the table.

After the table is dropped, I can query the transaction log to see that an object was dropped and who dropped it, but I can't tell which object was dropped (since it no longer exists).



To get the name of the dropped object, you have to restore a backup of the database, then use the dropped object ID to query the database (where the object still exists), to see the object name.




USE [TWO]
GO
--Create table
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[TestTable]') AND type IN (N'U'))
BEGIN
CREATE TABLE [dbo].[TestTable](
[FirstName] [varchar](30) NOT NULL,
[LastName] [varchar](30) NOT NULL,
[RowID] [int] IDENTITY(1,1) NOT NULL
) ON [PRIMARY]
END
GO
--Populate table
INSERT INTO TestTable (FirstName, LastName) VALUES  ('Chuck', 'Norris')
INSERT INTO TestTable (FirstName, LastName) VALUES  ('George', 'Washington')
INSERT INTO TestTable (FirstName, LastName) VALUES  ('Britney', 'Spears')
--Verify table
SELECT * FROM TestTable

--****************************
--BACKUP DATABASE NOW AND RESTORE TO NEW DB
--****************************
--Drop the table in TWO
DROP TABLE TestTable

--Get info on drop from transaction log
SELECT
[Begin Time],
[Transaction Name],
SUSER_SNAME([Transaction SID]) AS UserName,
[Transaction Id],
[Transaction SID],
[SPID],
(SELECT TOP (1) [Lock Information] FROM fn_dblog (NULL, NULL)
WHERE [Transaction Id] = fndb.[Transaction Id]
AND [Lock Information] LIKE '%SCH_M OBJECT%') AS ObjectID
FROM fn_dblog (NULL, NULL) AS fndb
WHERE [Transaction Name] = 'DROPOBJ'
GO

--In the prior query, review the ObjectID field values.  The object ID is the numeric value at the end, in between colons
--HoBt 0:ACQUIRE_LOCK_SCH_M OBJECT: 6:1554182124:0
--In this example, the object ID is:  1554182124

--Insert numeric object ID and run on restored copy of DB
USE TWO01
SELECT OBJECT_NAME(1554182124) AS ObjectName


Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter



eConnect will import data with leading spaces: Why should you care?

By Steve Endow 

 One fine day, you joyfully work on a simple Dynamics GP customer query.  You just want to look up the record for a customer.

SELECT * FROM RM00101 WHERE CUSTNMBR = 'WEB001'

Couldn't get much simpler.

But when you run the query, you get zero results.


Hmm, that's odd.  You open GP to verify the customer exists.


Yup, there it is in the correct company database.  You double check your query and make sure you are querying the correct database in SQL Server Management Studio--and everything looks okay.

So, what is going on?  How can the customer exist in GP, but not show up in a simple SQL query?

Look a little closer at the customer record in GP.


Well, there's the problem right there.  Do you see it?  

When entering data directly into Dynamics GP, if you try and type a space as the first character in any field, nothing happens.  You just can't enter a field value that starts with a space in the GP user interface.

But I have discovered that eConnect doesn't have such a restriction, and it will dutifully import data values that begin with a space.  It doesn't seem like a big deal, until you try and query that data.

In SQL, these two queries are different:

SELECT * FROM RM00101 WHERE CUSTNMBR = 'WEB001'

SELECT * FROM RM00101 WHERE CUSTNMBR = ' WEB001'

The leading space on the customer ID makes all the difference.  With the first query, I get no results.  With the second query I find the record that has a leading space on the customer ID.

Honestly, I don't know that I actually realized this distinction previously--it just isn't something I have had to try, use, or deal with.  Trimming white space is such second nature for a developer that I can't remember ever thinking about it.  It seems obvious in retrospect, but I think it's just one of those incredibly elementary assumptions that becomes invisible to you after so many years of development. 

When you open the Customer lookup window, you can see that a leading space also affects sorting.


I made up this specific example to demonstrate the issue with eConnect and GP.  In all of my integrations and code, I habitually use a Trim() command on all strings to trim leading and trailing white space, so I was actually quite surprised that eConnect doesn't trim leading spaces.

But this topic came up because a similar leading space issue showed up on a customer integration this week, and I was quite surprised.

I was surprised because having done software development for about 20 years now, I can't recall encountering this issue before.  While I may have encountered data that had a leading space, my code always trimmed leading and trailing spaces, so a leading space never resulted in a problem.

But in the case of my customer, they had populated a custom table with a few rows that had a leading space.  That leading space prevented a SQL WHERE clause from working, preventing records from being retrieved and imported.

My integration retrieved invoice " ABC123" (with leading space) from the invoice header table in SQL.  It then trimmed the invoice number and queried the invoice line table for all lines related to invoice "ABC123".  As you can guess, the line query retrieved 0 records.

The client and I spent several minutes reviewing the integration and the data trying to figure out why one invoice wouldn't import.  I eventually noticed that the data looked slightly shifted in the SQL query results grid.

It's subtle, but if you have other records above and below, it is much easier to spot.


Once we discovered the leading space in the invoice data, the client removed the spaces and the integration worked fine.

Paradoxically, the issue was caused because my integration code was trimming the invoice number.  Instead of using the actual " ABC123" value from the invoice header table, the trimmed version of "ABC123" caused the problem.  But that turned out to be fortunate, since I now know eConnect would have imported the value with a space, which would have certainly caused headaches in GP later on.
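
The trimming habit itself costs almost nothing. A small sketch of the defensive pattern, with a hypothetical staging table and column name:

using System;
using System.Data;
using System.Data.SqlClient;

class TrimExample
{
    // Trims the invoice number before it is used anywhere, and trims the staging
    // column in the lookup itself, so a leading space on either side can't break the join.
    // InvoiceLineStaging and INVOICE_NUMBER are hypothetical names.
    static DataTable GetInvoiceLines(SqlConnection conn, string rawInvoiceNumber)
    {
        string invoiceNumber = (rawInvoiceNumber ?? string.Empty).Trim();

        using (SqlCommand cmd = new SqlCommand(
            "SELECT * FROM InvoiceLineStaging WHERE LTRIM(RTRIM(INVOICE_NUMBER)) = @InvoiceNumber", conn))
        {
            cmd.Parameters.Add("@InvoiceNumber", SqlDbType.VarChar, 30).Value = invoiceNumber;

            DataTable lines = new DataTable();
            using (SqlDataAdapter adapter = new SqlDataAdapter(cmd))
            {
                adapter.Fill(lines);
            }
            return lines;
        }
    }
}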

So, the lessons are:

1. Leading spaces in data can be problematic in GP (or just about any database or application)
2. The GP application doesn't allow leading spaces, but eConnect will import them
3. Always trim leading and trailing white space for all of your imported data

Keep learning!


Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter



Dynamics GP GPConnNet ReturnCode 131074: One possible cause

By Steve Endow

Last week a customer contacted me saying that an eConnect integration suddenly stopped working.

They have been using the integration for a few years without issue, but without apparent explanation, it started returning an error of "ExecuteNonQuery requires an open and available connection".

The error message is pretty straightforward--the integration was failing to get a connection to SQL Server, but we didn't know why.

The client said that the only change they made was resetting the password for the GP user who was using the integration.  Despite numerous attempts to figure out the cause, I was unable to figure out why GPConnNet was not returning an open database connection.

Well today, while working on a different eConnect integration, I suddenly had the same problem.  GPConnNet was not returning an open connection.

After stepping through my code line by line, I found that GPConnNet was returning a ReturnCode value of 131074.

    GPConnection GPConnObj = new GPConnection();

    GPConnObj.Init("key1", "key2");
    gpConn.ConnectionString = "DATABASE=" + gpDatabase;
    GPConnObj.Connect(gpConn, gpServer, gpUser, gpPassword);

    //Check for error from GPConnNet
    if ((GPConnObj.ReturnCode & (int)GPConnection.ReturnCodeFlags.SuccessfulLogin) != (int)GPConnection.ReturnCodeFlags.SuccessfulLogin)
    {
        Log.Write("Failed to get SQL connection from GPConnNet", true);
        return null;
    }

I found the GPConnNet documentation, which listed the following return codes:

Constant                  Value                Description
SuccessfulLogin           1                    A connection was created
FailedLogin               2                    A connection could not be created
ExceptionCaught           131072 (&H20000)     An exception occurred during the connection attempt
PasswordExpired           65536 (&H10000)      The user’s password has expired



So, I'm not sure what my ReturnCode of 131074 meant.  Is it a combination of FailedLogin and ExceptionCaught? (131072 + 2)  Even if it is, what does that mean?
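
Treating the return code as a bit field makes the failure easier to read. This fragment is meant to slot into the error branch of the snippet above, using the constant values from the documentation table (131074 = 131072 + 2, i.e. ExceptionCaught plus FailedLogin):

    // Flag values from the GPConnNet documentation table above
    const int SuccessfulLogin = 1;
    const int FailedLogin = 2;
    const int PasswordExpired = 0x10000;   // 65536
    const int ExceptionCaught = 0x20000;   // 131072

    int returnCode = GPConnObj.ReturnCode;

    if ((returnCode & FailedLogin) == FailedLogin)
        Log.Write("GPConnNet: failed login -- check the GP user, password, and company database access", true);
    if ((returnCode & ExceptionCaught) == ExceptionCaught)
        Log.Write("GPConnNet: an exception occurred during the connection attempt", true);
    if ((returnCode & PasswordExpired) == PasswordExpired)
        Log.Write("GPConnNet: the GP user's password has expired", true);
    if ((returnCode & SuccessfulLogin) == SuccessfulLogin)
        Log.Write("GPConnNet: successful login", true);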

I knew the GP login was valid, since I had just created it.

And then it dawned on me.  What if I had forgotten to give the user access to the GP company databases?

Sure enough, I had forgotten to give my "gpimport" user access to my company databases.  I normally use the "Copy Access" button on the user window, but I think I got distracted while creating this user and forgot to copy the access.


Once I gave the user access, GPConnNet provided an open database connection.

I'm now wondering if my client has the same issue.  I'll be following up with them to see if they might have created a new company database or a test company database and they didn't grant access to the GP user being used by my integration.

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter










Two quick tips on Dynamics GP SQL Server backups

By Steve Endow

I recently learned two interesting things about SQL Server backups from the esteemed Victoria Yudin.

FIRST, I'm embarrassed to say that I didn't know that SQL Server 2008 added a native Backup Compression option.  Admittedly, I'm not a full time SQL DBA and I don't spend much (any?) time on SQL maintenance or backups for the TWO databases on my development servers, but I probably should have picked up on the feature some time in the last SEVEN years, you would think.

Anyway, now that Victoria schooled me, I found that there are two places where you can set backup compression.  The first is at the SQL Server instance level, under Properties -> Database Settings.  Setting this option causes all database backups to compress by default.


The second location where you can select backup compression is under the database backup window on the Options page.


Here you can use the default server setting, or choose to manually enable or disable backup compression.

I'm now going to enable the option for all of my SQL Servers.

Even if you didn't know about this option, if you have done SQL Server backups, you should already know that the SQL bak files are HIGHLY compressible.  In my humble opinion, you should always compress SQL backup files because of the massive reduction in disk space usage that it provides.

I previously have had customers use WinZip or WinRAR or 7-Zip to compress backups if they needed to send me a copy, and the difference in file size is astounding. (If you are dealing with really large files, 7-Zip offers the best compression, in my experience)  Another thing I've done is set the SQL backup folder to use Windows folder compression.  That works well for backups that only need to sit on disk.  But having SQL Server automatically compress the backup files is the most convenient option, as it also makes file copying or archiving much faster.


SECOND, Victoria and I learned a very interesting lesson about 32-bit vs. 64-bit SQL Server.  She has a customer with a GP 10 install that is running SQL Server on a 64-bit version of Windows.  The server has 24 GB of  RAM and solid state drives, so you would think it should be pretty speedy.  But Victoria noticed that the SQL backups for a 20 GB database took about an hour.  Again, since I don't do much DBA work and all of my test databases are small, I didn't really have any idea of how long it should take to backup a 20 GB database, but Victoria assured me that there was something wrong.

After a few calls with the customer, we learned that the GP 10 SQL Server was running a 32-bit version of SQL 2005.  Obviously the 32-bit version can't begin to utilize the 24 GB of RAM, but we weren't sure how backups were affected by 32-bit vs. 64-bit SQL Server--it has been so long since I've used 32-bit SQL that I no longer have any virtual machines with it installed.  Fortunately, the client had a second test server with identical specs as production, so he installed the 64-bit version of SQL 2005 on that server, restored a copy of the 20 GB production database and then ran a backup.

It took 2 minutes.  TWO MINUTES.  From one hour to two minutes for a 20 GB backup.

Stunning.  I expected some improvement, but wow.  So that told us that the server hardware was just fine--it was the limited memory of 32-bit SQL Server that was causing the long backups.

And I assume that the 64-bit version will also produce some performance improvement for GP users.  They may not have realized that processes were slow, but hopefully they benefit from an upgrade to 64-bit.

So there ya go--there's always something new to learn.

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter






Reopening Closed Years!


Okay, so my new year’s resolution has completely gone astray as I have been absent from this blog for way way way too long.  So here I am, coming out of the post-year end haze, hoping to get back in the habit.  So, I thought I would start out with some easy stuff that I can share in terms of Dynamics GP 2015 and the exciting things out there for those of you that are considering upgrading (BTW, our new implementations are going on to GP 2015 and we have started upgrading folks as well).


 
 So, in terms of exciting stuff in GP 2015, how about this little gem?

This is a super exciting one, because it eliminates all  of the time consuming options we had (reversing stuff manually in the tables  or sending the data to Microsoft or yadda yadda yadda). 

So why is this exciting?  Well, we all know that GP lets you post to the most recent historical year, right?  So, here in 2015, we can post to 2014 (if it is closed) but not 2013.  So, let’s say that we accidentally closed 2014 and we really do need to post to 2013 (this sort of thing happens most commonly during implementation, when you are loading several years of history and closing years in succession).

Now, you can click this lovely little button…
 

And you can opt to reopen the most recently closed year.  Exciting stuff.  Always make a backup first, of course!  All users must be out of GP when you do this.  Equally exciting is that it will also move analytical accounting information (if you are using it) back to the open year.  Once you run the reversal, it is recommended that you reconcile (Utilities-Financial-Reconcile) all open years starting with the oldest.

Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a senior managing consultant with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Updating Payroll Transactions for FUTA and SUTA

I had previously posted some scripts to do a compare, to identify transactions where the FUTA and SUTA fields are set differently than the setup records for the pay codes involved.  Remember, FUTA and SUTA are calculated at the time the reports are run (not on a per-payroll basis), which means you can make changes (in the SUTA setup in GP, or in the database) and see the impact on the reports.

--This first script does the compare on FUTA between the transaction and the Employee Pay Code record in Dynamics GP and shows those records where Subject to Futa is marked differently between the Employee Pay Code record and the transaction

select UPR30300.EMPLOYID, UPR30300.CHEKDATE, UPR30300.PAYROLCD, UPR30300.SBJTFUTA, UPR00400.PAYRCORD, UPR00400.SBJTFUTA
from UPR30300
inner join UPR00400 on UPR30300.EMPLOYID = UPR00400.EMPLOYID and UPR30300.PAYROLCD = UPR00400.PAYRCORD
where UPR30300.SBJTFUTA <> UPR00400.SBJTFUTA

--This script updates the payroll transaction history table to set the Subject to FUTA flag where it is marked on the Employee Pay Code record
update UPR30300
set UPR30300.SBJTFUTA = 1
from UPR30300
inner join UPR00400 on UPR30300.EMPLOYID = UPR00400.EMPLOYID and UPR30300.PAYROLCD = UPR00400.PAYRCORD
where UPR00400.SBJTFUTA = 1 and UPR30300.SBJTFUTA <> UPR00400.SBJTFUTA

Happy correcting! And, as always, make a backup and/or test this out in a test company first!
Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a senior managing consultant with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Get GPConnNet to connect to SQL Server instance with a TCP port number using SQL Alias

By Steve Endow
 
I have a very large Dynamics GP customer who has a very large and complex IT environment with a lot of security.  In their Test Dynamics GP environment, they have the unusual situation where they have a firewall between their Test SQL Server, and their Test Dynamics GP application server.

To make things more complex, they have multiple SQL Server installations and instances on their Test SQL Server.  This, combined with the firewall that is blocking UDP traffic, prevents them from using standard SQL Server instance names and dynamic TCP ports.  In their Test environment, they have been using this naming convention with GP to connect to their Test SQL Server instance through the firewall:

     SERVERNAME\INSTANCENAME,portnumber

SQL Server takes the unusual approach of using a comma to specify a port number, and I admit that this is the first time I had ever seen this connection string format that included the SQL Server port number.

For example:

     globalhqsqlservertest7\instance123,49156

This allows them to connect to the SQL Server instance on port 49156.

Except when that doesn't work.

This server/instance,port format works with SQL Server Management Studio and also works fine with ODBC DSNs, allowing Dynamics GP to work.  (BTW, they indicated that Management Reporter is unable to use the connection string with the port number)

However, what we discovered is that a Dynamics GP integration will not work when the connection string contains a comma.  You will get SQL Connection Error 26:
The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)
No matter what we tried, we couldn't get a connection with the .NET Dynamics GP integration, and only after creating a test SQL connection app was I able to identify that GPConnNet was the culprit, and not a network or firewall issue.

My guess is that GPConnNet is either stripping out the comma/port, or is unable to connect using the comma/port connection string.  A native SQL ADO.NET connection works fine using SQL authentication, but if you have a Dynamics GP related application that relies on GP username and password, you're stuck with GPConnNet.

After fruitlessly testing numerous workarounds, I found a discussion about SQL Server network Aliases.  I've never had to use them previously, but they offer a way to assign a simple name to a SQL Server instance name, including a specific port number.


SQL Aliases are setup in the infrequently used SQL Server Configuration Manager.  If you aren't familiar with the Configuration Manager application, I highly recommend understanding its role and capabilities.  It is a critical tool for troubleshooting SQL Server connectivity issues.

In Configuration Manager, you should see a 32-bit driver and 64-bit driver.  You will want to work with the one that matches your application.  If  you have your .NET app compiled to target x64, you'll use the 64-bit Alias, and vice versa for 32-bit apps.


Creating an Alias is very simple--just give it a name, specify the port, choose the protocol, and then enter the SQL instance name.

After you save the Alias, it should start working immediately without restarting the SQL Server service.

With the 32-bit Alias setup, my test application, using GPConnNet, was finally able to connect to the SQL Server instance on port 49156.
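
Putting the two side by side, this is roughly what the difference looks like in code. The server, instance, and port are the ones from this post; the alias name, database, and credentials are placeholders:

using System.Data.SqlClient;

class ConnectionTest
{
    static void Main()
    {
        // Plain ADO.NET with SQL authentication accepts the comma/port syntax just fine
        string directConnString =
            @"Data Source=globalhqsqlservertest7\instance123,49156;Initial Catalog=TWO;User ID=gpimport;Password=placeholder";

        using (SqlConnection conn = new SqlConnection(directConnString))
        {
            conn.Open();   // connects through the firewall on port 49156
        }

        // GPConnNet would not connect when the server name contained the comma/port.
        // After creating a 32-bit SQL alias (named "GPTESTSQL" here) that points at
        // globalhqsqlservertest7\instance123 on TCP port 49156, passing the alias as
        // the server name works:
        //
        //     GPConnObj.Connect(gpConn, "GPTESTSQL", gpUser, gpPassword);
    }
}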


So in the highly unlikely situation where you have to use GPConnNet with a SQL port number, there is a solution!


Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter



Source Code Control: Converting from VisualSVN to Git on BitBucket

By Steve Endow

I do a lot of development with Visual Studio, creating eConnect integrations and VS Tools integrations for Microsoft Dynamics GP.

Back in 2012, when I decided to finally set up a source code control solution, Subversion was one of the few source code systems I had heard of, other than the infamous and ancient Microsoft Visual SourceSafe.  I wasn't about to resurrect SourceSafe, and the 1.7 GB download for Team Foundation Server made quite a statement--it told me I shouldn't bother with it.

After some searching, I found VisualSVN--a Windows implementation of the open source Subversion that integrates seamlessly into Visual Studio.  I was up and running in minutes and had a project checked in to my new repository shortly thereafter.

I've been using VisualSVN since February 2012 and love it.  It is simple, relatively obvious, very easy to use for typical tasks, and has been 100% reliable.  I haven't had a single issue with VisualSVN on the server or on my many development servers.  It has been flawless.

Best of all, VisualSVN has a free license for certain qualified users, so you can't beat the price.  For simple, linear development with a small number of users on a project, I highly recommend it.

With VisualSVN, typical day-to-day check-in and check-out operations are very simple and generally require one or two clicks in my single user environment.  However, I've had a few cases where I've needed to revert to older code or research an issue in a prior version, and in those cases, the VisualSVN interface and tools become much less intuitive and less friendly.

I recently had a situation where a client discovered a bug in a version 1.0 production integration after I had already started making version 2.0 changes and adding new features to the code.  Version 1.0 had been in production for a few months before the bug was discovered.

It went like this:

  • Develop version 1.0
  • Release version 1.0 to QA environment
  • Test and refine version 1.0
  • Release 1.0 to production environment
  • All is well


  • Develop version 2.0
  • Release version 2.0 to QA environment
  • Test and refine version 2.0


  • Customer finds bug in version 1.0 in production
  • Chaos ensues

So there I was, with my half-baked version 2.0 code released in QA, faced with a bug in version 1.0 that needed to be fixed ASAP.

I had never faced this situation before, so I didn't know how to best handle it.  I knew that I had all of my code and revisions safely checked in to VisualSVN, so I knew I could roll back to my 1.0 release code.

But what about my current 2.0 code?  Even though my 2.0 code was checked in, I didn't want to overwrite my current project and roll back to 1.0.  I guess I was afraid that if I pulled down the old 1.0 code and checked in any changes, it would revert my code and mess something up.

My fundamental problem, I later realized, was that beyond checking in code, I didn't understand source code control best practices.  Surely there was a way to handle this situation, right?

I posted the question to Experts Exchange and quickly received a response that made total sense in hindsight.  I needed Branching.

I had been developing exclusively against my "trunk" code, in a completely linear manner, without ever branching.  My projects are typically relatively small, easy to manage, and usually don't require branching, but for my current project, the code was moderately complex and had enough features that I really needed to branch with each release, and perhaps with each sub-release.

With that obvious clue in hand, I hit the books.  I downloaded the free SVN Book and read every page.  I needed to understand how to properly branch in SVN / VisualSVN and manage my branches.

The SVN Book is fantastic, providing clear examples in a very easily understood and readable guide.

But I stopped in my tracks when I read this statement:
Once a --reintegrate merge is done from branch to trunk, the branch is no longer usable for further work. It's not able to correctly absorb new trunk changes, nor can it be properly reintegrated to trunk again. For this reason, if you want to keep working on your feature branch, we recommend destroying it and then re-creating it from the trunk.
This refers to the process of merging your "branch" back into the main "trunk" of the code.  Once you perform this merge, the branch is essentially deprecated: after the trunk changes again, the branch should no longer be used and should be "destroyed".

What???  So that means I can't persist branches and would have to jump through some hoops if I needed to "back port" some fixes into a prior release.  While the book offers a few workarounds for handling such situations, it appeared that SVN wasn't particularly good at, or really designed for, such "dynamic" branching and back-porting.

The book explained the SVN architecture well, so it was pretty obvious why this limitation existed.  But I was disappointed nonetheless.

I posted another question to Experts Exchange about my concern, asking how to deal with my situation in SVN.  The answer I received wasn't one I was expecting:
You may not like to hear it, but you should switch to Git. 
At first I thought this might be one of those Windows vs. Mac vs. Linux religious opinions, but after a few minutes of reading about Git, the recommendation made a lot of sense.

Git, despite the horrible name, is an amazingly powerful source control system designed to handle the completely crazy development process of the Linux kernel.  If it is good enough for Linus Torvalds and Linux kernel development, I have no doubt it can handle my simple needs.

I downloaded the free Pro Git book and started reading that night.  Mind. Blown.

Barney...wait for it...Stinson
Git is source code control and branching and merging on steroids.  Unlike the nice, simple, obvious, and intuitive SVN book, the Git book was harder to follow and digest.  Branch, branch again, branch your branches, then merge, rebranch, merge your merges to your merged merge and then magically just have it all end up in a single cohesive pile of code at the end of the day.

The design and capabilities of Git seemed like some crazy magic.


Despite not fully understanding all of the features that Git provided, it seemed clear that it was more than capable of handling my branching needs.

So, I then needed to figure out how to use it.  I realized that the ubiquity of Git meant that there were quite a few providers of hosted Git servers, so I could outsource my source code control server.  While my VisualSVN server has been virtually zero maintenance, I won't mind having one less VM running.

I looked into GitHub, the well known online code repository.  While it seems to be a great service, there was one significant issue, given my business model.  GitHub charges by "repository", which, in Git lingo, is essentially a project.  So if I have 30 clients with an average of 3 projects each, I'll have 90 repositories.

Hosted Git services differentiate between a public (open source) and private repository--in my case, all of my client projects will need private repositories, so I would need 90 private repositories.  On GitHub, that would cost me $200 a month.  Gulp.  That would be tough to justify.

Fortunately, there is an alternative service called Bitbucket that has a completely different pricing model that is much better suited to my needs.  Bitbucket charges by user, not by private repository, so I was able to get their free plan with unlimited private repositories and up to 5 users.  Perfect.

(BTW, Atlassian, the provider of Bitbucket, also offers Git server software called Stash that can be installed and hosted on an internal server in case that is appealing)

So now I was interested in using Git and had set up an account on Bitbucket.  How does it work, and how do I use it with Visual Studio?

This is one area where I think VisualSVN is a clear winner and Git, well, it's a distant second.  Because Git has a different architecture, a local distributed repository model, more flexibility, and more features, it takes effort to figure out how to use it and how to work with it in Visual Studio.  The local / distributed repository design of Git adds an extra step in the code management process that takes some adjustment when coming from SVN.  But on the flipside, having a local offline repository provides fantastic flexibility for distributed and remote development.

One nice plus is that Visual Studio 2013 has native support for Git, and for online services such as Bitbucket, built right into the Team Explorer window in Visual Studio.


While it is very handy to have the source control built right into Visual Studio, versus a client install and regular updates with VisualSVN, I find the user interface somewhat unintuitive.  I'm slowly learning how to use it, and for now I'm having to regularly refer to my notes every time I have to create a new repository and sync it to Bitbucket.

I'm sure I'll get more comfortable with Git, just as it took me a while to get familiar with VisualSVN, and I hope to take advantage of Git's rich branching and merging functionality.

Whether you consider SVN, Git, or some other solution, one general thing I learned from this project is that source code control is an entire discipline--like networking, system administration, or even software development.  It's one of those things where you really should invest time to learn and understand the discipline in addition to the specific technology you choose to use.

If I read the Git book a few more times, or a dozen, I hope I'll get more comfortable with the Git features and how to use them.

Now go Git coding!


Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter



BadImageFormatException Error after upgrading eConnect import to GP 2015: Don't forget to update your config file!

By Steve Endow

I recently upgraded a .NET eConnect Dynamics GP integration from GP 2013 to GP 2015.

In Dynamics GP 2015, the eConnect libraries were updated to use .NET 4.5.1, so when you upgrade any existing eConnect projects to GP 2015, you will need to use Visual Studio 2012 or higher.  I personally recommend skipping Visual Studio 2012 and going to Visual Studio 2013, which has a better user interface and some nice auto complete features, and also has native support for Git, which I recently discussed.

After you open your project, replace your eConnect references, and update to .NET 4.5.1, you can rebuild and produce your new eConnect integration for Dynamics GP 2015.

Easy peasy lemon squeezee, right?

In theory.

After completing this upgrade process for my integration (a scheduled Console application), I copied my upgraded EXE and DLL files to the client's workstation.  We then tested the integration.  As soon as we tried to run the EXE using Task Scheduler, the import crashed and we got an error message.

The only meaningful error text that was displayed was "BadImageFormatException".

Huh.  When you see the BadImageFormatException error, that almost always means that you have an issue mixing 32-bit and 64-bit libraries or projects.  So if I reference a 64-bit DLL but have my app targeted to 32-bit in Visual Studio, that can cause the BadImageFormat error.

Puzzled, I double checked my settings, and everything was consistently 32-bit.  My projects targeted x86, and I was definitely referencing only 32-bit DLLs. Hmmm.

As a test, I converted my projects and references to 64-bit.  I rebuilt and tested on the client's machine, but we got the same BadImageFormatException error.  Argh.

After some more research, I then opened the Visual Studio Configuration Manager to see if it might be overriding my Target Platform settings and mixing 32-bit and 64-bit elements.  Sure enough, when I opened the window, the settings were a mess--I don't know why, as I normally never touch that window.

I finally figured out how to clean up all of the configuration settings, and thought I had found and resolved the issue, for sure!

All nice and purrrrdy now...
I confidently delivered the new EXE and DLL files and we tested and...

It crashed right away with the same error.

Un. Be. Lievable.

I had run out of ideas.  I finally tried testing via the Command Prompt window to see if I could get any additional error info.

Unhandled Exception:  System.BadImageFormatException: Could not load file or assembly 'myapp.exe' or one of its dependencies.  This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.

Hmmm, this additional information would seem to indicate that the problem is a .NET version issue, and not a typical 32 vs. 64 bit conflict.
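
(Side note: in hindsight, a two-line diagnostic at the top of the console app would have shown exactly which runtime was loading it.  This is a sketch I would add now, using standard .NET Environment properties--it was not part of the original integration:)

using System;

class RuntimeDiagnostics
{
    static void Main()
    {
        // Which CLR actually loaded this EXE?  2.0.50727.x = .NET 2.0/3.5, 4.0.30319.x = .NET 4.x
        Console.WriteLine("CLR version:    " + Environment.Version);
        // Is the process running 32-bit or 64-bit?  (Is64BitProcess requires .NET 4.0 or later)
        Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
    }
}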

Okay...so I went back and triple checked my .NET versions.  My projects were definitely using .NET 4.5.1.  The eConnect libraries were definitely GP 2015 and definitely required .NET 4.5.1, so that wasn't the issue.  A third party library that I was using was definitely 4.5.x compatible.  And I couldn't reproduce the issue on my development machine, so clearly the components worked together.

I then tested my app on...not four...not five...but SIX different computers on my network: Windows 7, Windows 8, Server 2008, and Server 2012.  Of course it worked perfectly on all six.  This would seem to rule out a .NET version issue in my application.

Baffled.

I then asked the client if they had another computer we could test with.  Fortunately they did, so we tested on that computer.  I was hoping the issue was workstation specific, and while I might never know the cause of the problem, as long as we could get it working, I would be happy.

We tested on the other computer...but...crash.  BadImageFormatException.

Lacking any other options, I dug in deep.  I downloaded the Sysinternals Process Monitor and the client recorded the activity from the import from launch to crash.  I also ran Process Monitor on my server and recorded the activity when it worked successfully.

Finally, I had a small clue.  It wasn't obvious or specific, but it definitely told me that something was different on the workstation.  I put the two Process Monitor data files side by side in Excel and saw an obvious issue.


My server is on the left.  The client's workstation is on the right.  Notice how my machine is clearly using .NET 4, while the client machine is clearly using .NET 2.  That would explain the error message and the problem.

So I had found the "cause" of the error, but didn't have an explanation as to why this was happening.

The .NET version is set in Visual Studio, and Windows just follows instructions and loads the appropriate .NET version.  It's not like Windows will randomly choose the wrong .NET version.

I then searched for "windows runs wrong .net version" or something like that.  I didn't see any obvious results relating to my issue, but somehow "config file" popped into my head.  Hmmm.

When I provided the new GP 2015 version of the import, I only updated the DLL and EXE files.  The exe.config configuration file was already set up, so we didn't make any changes to it.

And what is contained in a .NET configuration file?

That is an excellent question, I'm glad you asked!

Behold, it is a .NET version setting!

<startup><supportedRuntime version="v2.0.50727"/></startup>

And so that is what was still in the exe.config file, and that is why Windows was dutifully trying to launch the application with .NET 2.0, which is in turn why the application crashed.

Face.  Palm.


While it isn't an excuse, I at least have an explanation.

I have certainly upgraded integrations before and changed .NET versions, but with upgrades from .NET 2 to .NET 3.5, no change was required in the configuration file--it stayed at v2.0.

But with an upgrade to .NET 4 or higher, the reference in the configuration file needs to be changed.  While I have developed and deployed .NET 4+ integrations, I believe this was the first time that I upgraded an existing integration from .NET 2 to .NET 4.  It just didn't occur to me that the configuration file would have to be updated.

Here is what my configuration file should have looked like:

<startup><supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.1"/></startup>

As soon as I modified the config file on the client workstation, the GP 2015 integration ran fine.

And I shook my head.

But hey, good news...I eventually figured it out...


Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter







One sign of a really bad API

By Steve Endow

The last several months, I've been working with a third party product that integrates with Dynamics GP.

While the product sometimes technically works, it is absolutely riddled with bugs.  These aren't subtle bugs where you need to perform 26 steps with specific data in perfect sequence to trigger them.  These are glaring, obvious, brick-wall-in-the-middle-of-the-road bugs that will cause an error with the simplest data entry example.  In 10 minutes I found a half dozen bugs in the product--and that was on just 1 window.

The product has a web service that serves as the back end to their Dynamics GP integration--so their custom GP windows call the web service, rather than calling DLLs.  That web service also serves as the API if you want to integrate with their product.  Great, makes sense.

After working through numerous bugs in the product, we finally got it working and I got my integration working with it as well.  All seemed peaceful.

Until one day we noticed that some records that were sent to the product through my integration weren't in the database.  Records existed in GP, but didn't exist in the third party product tables.

After being puzzled for several minutes, we eventually tried to manually enter the same data into the product window in GP.  We entered the necessary data and clicked on Save.


So the problem wasn't my integration per se, it was that this particular data was failing to save.  We looked at the data, pondered for a minute, and saw that the data in one of the fields was relatively long--51 characters.

Curious, we made that value shorter and clicked Save.  The save was then successful.  I then opened the diagnostic logs for the web service and saw that it had logged an error.
String or binary data would be truncated
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
I then checked the SQL table, and sure enough, the field length was 50 characters.  Our 51 character field value was triggering the error.

Developers should immediately recognize the issue, and alarm bells should be ringing.

This tells us that the developer of the web service API is not validating input values or lengths in their API when they attempt to save data to their own tables.  So if a field is 50 characters and I submit 51 characters, their method will fail with a SQL exception.
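
Even a basic guard clause at the top of the save method would have turned this into a meaningful error instead of a raw SQL exception.  Something along these lines--a hypothetical sketch, with the field name and 50 character limit taken from this example rather than from the vendor's actual code:

// Hypothetical server-side validation at the API boundary.
public void SaveRecord(string purchaseOrderNum /*, other fields */)
{
    const int PurchaseOrderNumMaxLength = 50;  // matches the column definition

    if (purchaseOrderNum != null && purchaseOrderNum.Length > PurchaseOrderNumMaxLength)
    {
        // Give the caller an actionable error instead of letting SQL Server
        // throw "String or binary data would be truncated".
        throw new ArgumentException(string.Format(
            "PurchaseOrderNum is {0} characters; the maximum is {1}.",
            purchaseOrderNum.Length, PurchaseOrderNumMaxLength));
    }

    // ...parameterized INSERT would follow here...
}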

Digging through the log file, it looks like they are building SQL statements by concatenating a string.

INSERT INTO SomeTable (TransactionType, Amount, TransactionTime, IsProcessed, CurrencyId, InvoiceNum, PurchaseOrderNum, SubmittedBy, CompanyId, CustomerId, CompanyName, AgreementId)
VALUES (1,10.99,1,'3/23/2015 2:37:08 PM',2,1,1, 'STDINV3588', '', '1', 'sa','Test Company','','-1','TI TESTCO100',2,2,'Test Company', 0)

While there might be some rare situations where I build a SQL statement like this, I do it reluctantly as an exception, and I never do it with user generated data.  The potential for errors from unvalidated data is bad enough, not to mention issues like SQL injection.

What they should be doing is something like this:

SqlParameter[] sqlParameters = new SqlParameter[2];
sqlParameters[0] = new SqlParameter("@CustomerID", System.Data.SqlDbType.VarChar, 15);
sqlParameters[0].Value = gpCustomerID.Trim();
sqlParameters[1] = new SqlParameter("@CompanyID", System.Data.SqlDbType.Int);
sqlParameters[1].Value = gpCompanyID;

In this example, my code is building parameters for a SQL command and is defining data types and maximum value lengths.  Customer ID must be a VarChar of no more than 15 characters, and Company ID must be an integer.  This doesn't handle all of the potential problems of someone submitting a 20 character customer ID (which should be caught and handled earlier in the process), but it at least prevents a SQL truncation exception.
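
For completeness, those parameters would then be attached to a parameterized command rather than concatenated into the SQL string--a minimal sketch, where the table and column names are borrowed from the example above and the connection string is assumed:

// Hypothetical parameterized insert using the parameters defined above.
string sql = "INSERT INTO SomeTable (CustomerId, CompanyId) VALUES (@CustomerID, @CompanyID)";

using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddRange(sqlParameters);  // the SqlParameter array from the prior example
    conn.Open();
    cmd.ExecuteNonQuery();  // SQL Server receives typed, length-limited values
}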

This isn't rocket science and isn't a developer secret.  It's pretty basic .NET development.

After finding this issue, I had to modify my integration code to truncate input values before submitting them to the web service API.
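
The workaround on my side was simple enough--something like this hypothetical helper, applied to each string value before calling the vendor's API (each field needs its own maximum length, matching the vendor's table definitions):

// Hypothetical client-side workaround: trim values to the vendor's column lengths
// before submitting them, since the API will not do it for us.
public static string Truncate(string value, int maxLength)
{
    if (string.IsNullOrEmpty(value))
    {
        return value;
    }

    return value.Length <= maxLength ? value : value.Substring(0, maxLength);
}

// Usage (field and length are illustrative):
// request.PurchaseOrderNum = Truncate(gpPurchaseOrderNum, 50);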


All because the developer of an API for a "commercial" software product that is being sold to Dynamics GP customers for thousands of dollars doesn't know how to validate inputs.  And the "QA department" clearly didn't do a whole lot of testing.

Developer, please pick up the chalk and start writing.



I make coding mistakes all the time, but this clearly isn't a typo or a small mental lapse.  It's a fundamental design issue in the code that correlates well with all of the bugs we have found in the product.

On the plus side, this developer is making me feel like a coding genius.


Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter

http://www.precipioservices.com

Clearing Out Old Uninterfaced Fixed Assets GL Activity

It's not unusual for Fixed Assets to get implemented after the initial project for Dynamics GP.  But lately, I have had a few different clients who went live with Fixed Assets but never actually went live.  What do I mean? They started using the module, but for whatever reason (e.g., lack of confidence in the GL accounts used, issues with how items were calculating, not understanding the process) they did not use the GL interface to pass the journal entries from Fixed Assets to the General Ledger.






Of course, if there are issues with the accounts being used, the calculations, or the understanding of the process, those need to be addressed.  But once they are, what do you do with all of that uninterfaced activity (considering it might be using incorrect accounts)?  If you are lucky, there won't be much of it and you can just run the GL interface (Routines-Fixed Assets-GL Posting) and delete the resulting batch.  But as luck would have it, I have had a couple of instances where there was SO MUCH ACTIVITY that it took hours and hours for the interface to run.  At worst it was locking up the machine, and at best it was annoying to have to deal with.


So what to do?  Well, let's just update those records in the database so that they think they were previously interfaced.  This approach is surprisingly easy because there is just one table involved, FA00902.






If you do a select on that table, you will see that it contains all of the GL activity records and it has columns for GL information.


  • INTERFACEGL stores a 1 if the record is to be interfaced to the GL
  • GLINTTRXDATE, GLINTDATESTAMP both store 1/1/1900 until the record has been interfaced to the GL, and then these dates are updated
  • GLINTBTCHNUM stores the batch name (FATRX000...) created in the GL
So, with these fields in mind, the following script would update the records and set them as interfaced.  Keep in mind that you might want further restrictions in your WHERE clause on FAYEAR or FAPERIOD, which are also in the table.




--Confirm records to be updated
SELECT * FROM FA00902 WHERE INTERFACEGL=1 AND GLINTBTCHNUM=' '




--Mark all records as interfaced to GL
UPDATE FA00902 SET GLINTTRXDATE='whatever date you want', GLINTDATESTAMP='whatever date you want', GLINTBTCHNUM='CLEAROUT' WHERE INTERFACEGL=1 AND GLINTBTCHNUM=' '


As always, please make sure that you have a backup and use the script above to first validate what will be updated before actually doing the update. Happy updating!


Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a senior managing consultant with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.






