Monday, February 29, 2016

Check Requests – Don’t Let Paper Take Over Your Life!

We are back this week with another infographic! This time, we show just how much the average company spends when it manually processes check requests. Gone are the days when your AP department was up to its neck in paper, taking weeks to get vendors paid. Start saving time and money today by automating your check request process with seamless integration to Dynamics GP and NAV! Check out our short video online today!


by DynamicPoint

The post Check Requests – Don’t Let Paper Take Over Your Life! appeared first on SharePoint Blog.


by Mike Marcin, DynamicPoint via SharePoint Blog

DFFS v4.365 released

Finally, after a long beta period, the latest version of Dynamic Forms for SharePoint (DFFS) is released.

DFFS v4.365 – February 29, 2016

  • Added support for using DFFS with DocumentSets.
  • Added new trigger “Workflow status” that lets you check the status on a workflow in the current item.
  • Added option to share the Field CSS configuration between different forms in the same list.
  • Added new option to preserve the selected tab when navigating from DispForm to EditForm in a list item. You find this setting in the Misc tab.
  • Split the “showTooltip” out in a separate function to let advanced users override it.
  • Added new functionality to show a list of empty required fields above the form when the save is halted. The list is clickable and will take you to the correct tab / accordion and highlight the field. You can turn this feature on in the Misc tab. Please note that you must update the CSS files also.
  • Added option to set the default “To” and “Cc” in the “E-Mail active tab” feature to a people picker in the current form, or to a fixed email address.
  • Added option to send e-mails from DFFS rules. You can configure the e-mail templates in the new “E-Mail and Print” tab in the DFFS backend. Added support for delaying an e-mail until a set “Send date” has been reached. Please note that this is only possible if you use the “Use custom list with workflow to send E-Mails” option, and not when using the built-in functionality for sending e-mails in SP 2013 (REST). You find more information on the help icon in the “E-Mail and Print” tab in the DFFS backend, and in the user manual, which also includes a video describing the workflow setup.
  • Changed the “target” on the link to a document of type “pdf” or “txt” in a DFFS dialog so it will open in a new window.
  • For SP2007: Made the “Return to DispForm when editing an item and NOT opening the form in a dialog” checkbox visible in the Misc tab.
  • Fixed a possible backwards compatibility issue when using the latest version of DFFS with a configuration saved with an older version.
  • Fixed some issues with using “content type” as trigger in a DFFS form.
  • In triggers on date and time columns: added support for comparing with today like this:
    [today]+14 or [today]-5
    

    The number is the number of days you want to offset the value with. This same functionality can be used when comparing with a value pulled from another date and time column in the form – like this:

     {NameOfField}+14 or {NameOfField}-5
    
  • Changed the “debug output” to make it easier to read by collapsing some sections initially.
  • Fixed a bug with hiding a field by a rule while using the accordion functionality.
  • Fixed the “change trigger” on date columns so it will trigger the change event on invalid dates.
  • Changed the backend code for detecting the change of a PeoplePicker field.
  • Added support for using People pickers as trigger in SP 2007 and 2010. This requires that you update SPJS-Utility.js. This may not work 100% in all cases – post any findings in the forum.
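To illustrate the date offset syntax described in the changelog above, here is a hedged sketch (not DFFS source code; `resolveDateToken` and the `fieldValues` map are illustrative assumptions) of how a trigger value such as [today]+14 or {NameOfField}-5 could be resolved to a concrete Date:

```javascript
// Hedged sketch, not DFFS source: resolve a DFFS-style date token such as
// "[today]+14" or "{NameOfField}-5" to a Date object for comparison.
function resolveDateToken(token, fieldValues) {
  var m = token.match(/^(\[today\]|\{(\w+)\})([+-]\d+)?$/);
  if (!m) return null; // not a recognized token
  var base = m[1] === "[today]"
    ? new Date()                   // today's date
    : new Date(fieldValues[m[2]]); // value pulled from another column
  var offsetDays = m[3] ? parseInt(m[3], 10) : 0;
  base.setDate(base.getDate() + offsetDays); // offset in whole days
  return base;
}

// Example: a hypothetical "Due" column holding Feb 1, 2016, offset by +14 days
var due = resolveDateToken("{Due}+14", { Due: "2016-02-01T12:00:00" });
```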

You find the full change log here: http://ift.tt/1AglZXU

User manual

The user manual has been updated with the latest changes: http://ift.tt/1KkIp9C

Post any question in the forum

http://ift.tt/1AglYDj

Alexander


by Alexander Bautz via SharePoint JavaScripts

The Real Cost of the Cloud Search Service Application

The Cloud Search Service Application has been in beta since August 2015; it will ship out of the box with SharePoint Server 2016 and is available for SharePoint Server 2013 with the August 2015 Public Update or later. The Cloud Search Service Application greatly improves the hybrid search experience by merging your SharePoint on-premises results with Office 365 results in the same view, instead of the limited result blocks we had to use with federated search.

Cloud Search Service Application

At Ignite last year, when Microsoft showed it off to the world for the first time, they used our favorite fictional company, Contoso, as an example of how a company could save money by using this new feature.

Before:

Cloud Search Service Application

After:

Cloud Search Service Application

After implementing the Cloud Search Service Application, Contoso was able to go from 10 search servers down to only 2. Saving 8 SharePoint Server licenses is indeed a very big incentive for Contoso to use this new service! However, until February 17th we had no idea whether this service was going to cost us money, and if so, how much. With the latest post by Mark Kashman on the Office Blog, called Auditing, reporting and storage improvements for SharePoint Online and OneDrive for Business, we finally have some real numbers to put on the table.

In the blog post we learned that for every 1 TB of pooled storage in SharePoint Online, we are allowed to index one million items from our on-premises SharePoint farm. Let’s break it down to see what this all means.

First, let’s talk about pooled storage. Pooled storage means the total amount of storage your company has for SharePoint Online. By default, we get 1 TB included, plus 0.5 GB per user licensed in Office 365. So a company with 2,000 users would have 1 TB + (2,000 × 0.5 GB) = 2 TB. If we need more space, we can buy the “Office 365 Extra File Storage” add-on at $0.20/GB/month, so roughly $200/TB/month.

While most companies probably have under one million items in their on-premises search index (which is covered by the 1 TB included in Office 365), companies with maybe 10 or 15 million documents in their index will need to calculate their costs. Let’s take Fabrikam, for example, another fictional Microsoft company, which currently crawls 20 million items with SharePoint 2013 and has 5,000 employees.

Fabrikam would have 1 TB + (5,000 × 0.5 GB) = 3.5 TB of space in the cloud included with their subscription, but in order to move all of their index to the cloud, they would need an additional 16.5 TB, costing them $3,300 per month.

Remember that paying that $3,300 per month also gives them a total of 20 TB to store documents in the cloud, so Fabrikam could move some of their SharePoint sites to the cloud. The less content crawled on-premises, the less you pay, since there is no per-document price for indexing in SharePoint Online.
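The arithmetic above can be sketched as a quick calculation. This is a hedged sketch using only the per-user and per-TB figures quoted in this post (actual Office 365 pricing and quotas may differ, and the function names are illustrative, not a real API):

```javascript
// Hedged sketch of the storage math in this post: 1 TB base + 0.5 GB per
// licensed user of pooled storage, one million indexed on-premises items
// allowed per TB, and ~$200/TB/month for the "Extra File Storage" add-on.
// Uses 1 TB = 1000 GB to match the round numbers above.
function pooledStorageTb(users) {
  return 1 + (users * 0.5) / 1000; // 1 TB base + 0.5 GB per user
}

function extraIndexCostPerMonth(indexedItemsMillions, users) {
  var neededTb = indexedItemsMillions;                    // 1M items per TB
  var extraTb = Math.max(0, neededTb - pooledStorageTb(users));
  return extraTb * 200;                                   // ~$200/TB/month
}

// Contoso-style example: 2,000 users → 2 TB of pooled storage included
console.log(pooledStorageTb(2000));              // → 2
// Fabrikam: 5,000 users, 20M indexed items → 16.5 TB extra → $3,300/month
console.log(extraIndexCostPerMonth(20, 5000));   // → 3300
```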

Also remember that storage prices might change in the future as hardware prices drop, and with cloud competition also increasing, Microsoft might include more storage by default in Office 365.

Even at this price, I still think the Cloud Search Service Application can save enterprises money and, more importantly, provide a far better search experience for business users!

If you are not familiar with the Cloud Search Service Application, I did an overview at CMSWire: SharePoint Cloud Search: What’s in it For You

Do you think the pricing for the Cloud Search Service Application is fair? Looking forward to reading your opinions in the comments!

The post The Real Cost of the Cloud Search Service Application appeared first on Absolute SharePoint Blog by Vlad Catrinescu.


by Vlad Catrinescu via Absolute SharePoint Blog by Vlad Catrinescu

Is SharePoint Responsible for Poor SharePoint Adoption?

While Microsoft SharePoint forms a substantial part of the ECM pie, it is quite strange that CIOs at several enterprises are struggling to gain internal acceptance from employees – the real user base. In today's market, ECM (Enterprise Content Management) has become a crucial tool that ensures secure storage, better compliance and enhanced findability of corporate data. However, it is unfortunate that many enterprises fail to derive any significant value from SharePoint as a content management tool, mainly due to a lack of proper understanding.

Even AIIM International, in one of its surveys, notes that although SharePoint acts as a bridge between content management and the enterprise, adoption has always been a challenge for CIOs.

When respondents were asked to describe the progress of their SharePoint project:

  • 7% of respondents described their project as completely successful
  • 11% said their project was successful, though they encountered several issues
  • Approx. 26% said they achieved some success initially, but the project later stalled
  • Approx. 30% said they struggled hard to meet the original expectations

It is widely believed in the corporate world that SharePoint's failures stem from its reputation as “a jack of all trades and master of none”. This may or may not be true, but it cannot be deemed the only reason why SharePoint still struggles to achieve wide acceptance. In fact, there have been numerous instances where failures in SharePoint projects can be directly attributed to human factors, including poor up-front planning and a lack of executive enthusiasm for the platform.

Contrary to our beliefs, the fault does not lie in the technology: it lies in our so-called highly developed human brains!

Experts who have worked closely with SharePoint strongly believe that the technology is not the real problem. Rather, it is mostly the people who are to blame. Most often, enterprises make several errors with SharePoint, including:

  • Enterprises leave the development and deployment of SharePoint to IT, so by the time it is implemented and integrated, the business managers – the main users – either have no idea what it is all about or possess only partial knowledge.
  • Shortcomings in the initial planning stage are considered one of the main reasons SharePoint deployments fail. A lack of thorough research and proper articulation of business requirements makes the situation difficult.
  • Figuring out in advance how SharePoint will solve specific problems puts the organization in a better position and feeds directly into preparing training modules.
  • SharePoint is built to be user friendly, but it still has a learning curve. Employee training modules therefore need to include an elaborate explanation of the platform and its purpose in the organization. A demonstration video of how it works will also make it easier to understand.
  • Once SharePoint is deployed, follow-up is extremely important; however, it is often neglected. Many employees are slow learners and may take time to learn SharePoint. They might fall short of best practices or get frustrated with problems that are relatively easy to solve. Demonstrating an interest in the users' experience provides the perspective necessary to keep the deployment on the right path.

So, what are the major reasons SharePoint projects stall or fail?

  • 5% said they were stuck up in the earlier versions or conventional systems
  • 30% said the users never really liked it or used it
  • 35% said that there was lack of proper planning at the very onset of the project
  • 40% said it was due to lack of adequate modules for user training
  • 45% attributed lack of enforcement and endorsement from senior management

Moreover, one of the most critical findings was a lack of planning, purchase and deployment of SharePoint in the context of an information governance strategy. When asked how well SharePoint is aligned with information governance policies:

  • 12% said that it aligns well with the enterprise’s governance policies
  • 18% said it is not well-aligned
  • 22% said they don’t have much to do with the governance policies
  • 48% said they still have work to do to align SharePoint with IG

One very important thing that catches the eye is that around 85% of respondents acknowledged these shortcomings. We are all like drivers who never ask for directions and then get upset when the navigator misses the destination: we started railing against SharePoint rather than finding the real obstacles.

To sum it up, I would say the survey indicates that organizations need to delve deeper and find answers to “Why aren't SharePoint deployments working?”

While user training forms a major part of the problem, organizational hurdles such as inadequate support from senior management, little or no IT investment, and poor planning play a significant role in making SharePoint implementations unsuccessful. Addressing these shortcomings up-front and concentrating on user training is the best recipe for successful SharePoint adoption.


by Chirag Shivalker via Everyone's Blog Posts - SharePoint Community

Friday, February 26, 2016

Does everyone now know SharePoint?

Part one of a two part SPTechCon recap

While sitting here in Austin, Texas, this week presenting at and attending SPTechCon, it has been very interesting to see which sessions are popular. The sessions I present are normally focused on very specific technical topics, which in reality means that attendance tends to be lower than for others. I have often wondered about this: which sessions do people attend, and what types of topics bring in the big crowds?

read more


by via SharePoint Pro

Is it Mobile First or Cloud First or both?

Part two of a two part SPTechCon recap

As SPTechCon wrapped up, some great things had been presented, demonstrated and discussed. The day began with Joel Oleson, Director of Business Development for Konica Minolta Business Solutions USA, walking us through Microsoft's mobile strategy and what the future holds. Interestingly, the historic view has always been cloud first, with mobile almost disappearing as a core goal. During Joel's session, however, it became evident that the model is now Mobile First combined with Cloud First.

read more


by via SharePoint Pro

The three things you need to know before diving into SharePoint 2016

With the emphasis on Office 365 and a longer-than-expected release cycle, it's understandable that some were wondering whether SharePoint 2016 would ever arrive. It's now firmly on its way, however, with public previews and a rumored mid-March release date.

read more


by via SharePoint Pro

DynamicPoint’s Integration with Projects and Jobs | New Video!

In our latest video, we highlight the ability of our SharePoint applications to integrate seamlessly with various Dynamics add-on modules. With DynamicPoint you get business automation expertise in expense, requisition and invoice automation, coupled with knowledge of just how Dynamics works, based on our exclusive integration to Microsoft Dynamics GP & NAV. Our team knows, understands and configures our applications to integrate with the following:

  • Project Accounting
  • Encore Projects
  • Analytical Accounting
  • Job Tracking
  • Dimensions

The product development approach taken to support these integrations lends itself to being highly flexible as to what Dynamics modules or even custom external databases can be integrated. As opposed to taking the approach of supporting one or the other, a concept of “external categories” has been implemented in all three of DynamicPoint’s products. This strategy enables data to be queried directly from Dynamics and either selected or defaulted on the SharePoint web parts within the Expense, Requisition and Invoice Management Applications. These categories can include such items as projects and cost categories, jobs and tasks or analytical accounting codes and dimensions.


by DynamicPoint

The post DynamicPoint’s Integration with Projects and Jobs | New Video! appeared first on SharePoint Blog.


by Mike Marcin, DynamicPoint via SharePoint Blog

PowerShell PSRemoting error on SharePoint servers

Last week I ran into a very strange issue that took me days to figure out and resolve. It is not directly a SharePoint problem, but it can give you a lot of headache.

As part of a much larger project, one of my tasks was to enable PSRemoting on SharePoint servers. The problem was that our clients have SharePoint 2010 on Windows 2008 R2 servers, on which PSRemoting is not enabled automatically, so we had to enable PSRemoting on all of these servers. To save time, we implemented a group policy to enable PSRemoting on all servers. Once it was in place, I tried to connect to one of the SharePoint 2010 servers, let's call it SPTest, and received the following message:

Enter-PSSession -ComputerName SPTest

Enter-PSSession : Connecting to remote server SPTest failed with the following error message : WinRM cannot process the request. The following error with errorcode 0x80090311 occurred while using Kerberos authentication: There are currently no logon servers available to service the logon request.
Possible causes are:
-The user name or password specified are invalid.
-Kerberos is used when no authentication method and no user name are specified.
-Kerberos accepts domain user names, but not local user names.
-The Service Principal Name (SPN) for the remote computer name and port does not exist.
-The client and remote computers are in different domains and there is no trust between the two domains.
After checking for the above issues, try the following:
-Check the Event Viewer for events related to authentication.
-Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting or use HTTPS transport. Note that computers in the TrustedHosts list might not be authenticated.
-For more information about WinRM configuration, run the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic. At line:1 char:1 + Enter-PSSession -ComputerName SPTest

My first thought was that something had gone wrong with the WinRM service configuration on the server, because I was able to connect to other, non-SharePoint servers. After manually enabling PSRemoting, nothing changed; the same error message appeared. I searched the event log on the SharePoint server and found nothing that would indicate what went wrong. The Test-WSMan cmdlet also reported that everything was properly configured. Very strange.

After looking in the Event Viewer on my client machine, I found the following error message:

The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server SPTest$. The target name used was HTTP/SPTest. This indicates that the target server failed to decrypt the ticket provided by the client. This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using. Ensure that the target SPN is only registered on the account used by the server. This error can also happen if the target service account password is different than what is configured on the Kerberos Key Distribution Center for that target service. Ensure that the service on the server and the KDC are both configured to use the same password. If the server name is not fully qualified, and the target domain (ADTest.COM) is different from the client domain (ADTest.COM), check if there are identically named server accounts in these two domains, or use the fully-qualified name to identify the server.

OK, this problem has something to do with Kerberos. But what? The SPTest machine is joined to the domain, and everything is configured properly. How can the ticket be the problem? After consulting Google, I found this blog post by Damien Caro: http://ift.tt/1LOq8mf. He explains that if you have application pools running under a domain service account, you may run into this issue: WinRM checks for an HTTP SPN on the computer object (HTTP/name-of-the-server), finds the HTTP SPN registered elsewhere rather than on the computer object, and therefore cannot authenticate using Kerberos and simply fails. It turned out that this was what had been causing the problem from the beginning.

I found two solutions for this issue.

The first one is quick and dirty: add one more DNS A record for your machine, or add an entry to your hosts file, giving your server a different name with the same IP address. For instance, my server name is SPTest; I added SPTest_PSRemoting with the same IP address to my client machine's hosts file. After that, Enter-PSSession -ComputerName SPTest_PSRemoting worked! However, this solution is only suitable for testing purposes; you don't want to add additional DNS records for all of your servers or hassle with hosts files.
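As a sketch, the hosts-file variant of this workaround is a single line on the client machine (the IP address below is purely illustrative):

```
# C:\Windows\System32\drivers\etc\hosts on the client machine
# (10.0.0.15 is an illustrative address for the SPTest server)
10.0.0.15   SPTest_PSRemoting
```

Because the alternate name carries no HTTP SPN, WinRM falls back to authentication that succeeds against the computer account.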

The other solution is to configure your service account's SPNs to be scoped to the ports used by your web applications. This involves typing a few commands at a command prompt, but be careful: you first need to make a list of all the ports your IIS server uses for web applications.

The first step was to query the SPN. I opened a command prompt and typed:

C:\Windows\system32>setspn -Q http/SPTest
Checking domain DC=ADTest,DC=com
CN=SPS_WebApp,OU=Users,DC=ADTest,DC=com
HTTP/SPTEST.ADTest.com
HTTP/SPTEST
HTTP/report
HTTP/reportstaging
HTTP/reporttest
HTTP/reportdev

Existing SPN found!

You can see that the service account SPS_WebApp has the SPN HTTP/SPTEST. Since WinRM checks for an HTTP SPN on the computer object (HTTP/SPTEST), it finds the HTTP SPN registered on the service account rather than on the computer object, so it cannot authenticate using Kerberos and fails.

To get the WinRM Windows Service remote connection to work, narrow the scope of the SPN to the port number for the website. Rather than adding:

HTTP/SPTest.ADTest.com
use
HTTP/SPTest.ADTest.com:80

In this case the services are hosted in the web site running on port 80. The WinRM service listens on port 5985 by default and therefore does not match the port-specific SPN. Rather than specifying a port, it is also possible to use the NetBIOS name for the network host if the fully qualified domain name is used for the SPN.

For instance, the IIS server on my machine uses three ports (80, 81 and 85) for different web applications.

First I had to add SPNs that include the ports:

Setspn -S http/SPTest:80 ADTest\SPS_WebApp
Setspn -S http/SPTest.ADTest.com:80 ADTest\SPS_WebApp
Setspn -S http/SPTest:81 ADTest\SPS_WebApp
Setspn -S http/SPTest.ADTest.com:81 ADTest\SPS_WebApp
Setspn -S http/SPTest:85 ADTest\SPS_WebApp
Setspn -S http/SPTest.ADTest.com:85 ADTest\SPS_WebApp

After that, I had to delete the port-less SPNs:

Setspn -D http/SPTest ADTest\SPS_WebApp
Setspn -D http/SPTest.ADTest.com ADTest\SPS_WebApp

And that's it! I then queried the SPN for http/SPTest and received a message that no such SPN was found:

C:\Windows\system32>setspn -Q http/SPTest
Checking domain DC=ADTest,DC=com
No such SPN found.

But after querying the SPN with an explicit port, I got the following:

C:\Windows\system32>setspn -Q http/SPTest:80
Checking domain DC=ADTest,DC=com
CN=SPS_WebApp,OU=Users,DC=ADTest,DC=com
HTTP/SPTEST.ADTest.com:80
HTTP/SPTEST:80
HTTP/SPTEST.ADTest.com:81
HTTP/SPTEST:81
HTTP/SPTEST.ADTest.com:85
HTTP/SPTEST:85
HTTP/report
HTTP/reportstaging
HTTP/reporttest
HTTP/reportdev

After this intervention I had no more problems connecting to the SharePoint servers.


by Krsto Savic via Everyone's Blog Posts - SharePoint Community

How to Do Batch Search ExecuteQueries in SharePoint 2013 Using the Client Side Object Model in C#

In one of the older articles, we saw how to execute ExecuteQueries in server-side code within a web part. Now I have met the same kind of requirement, but with a difference: here I am executing the search from a Web API. We already saw here how to create a basic Web API.
Let me share the code, which is straightforward. I am not explaining the method in detail, as it is static and has no other external dependencies.

private static List<DocTopic> GetTopicDocumentCountBatch(TermCollection docTopicsTermCollection, string locationTermID, ClientContext clientContext)
{
    // The list of KeywordQuery objects, converted to an array later
    List<KeywordQuery> keywordQueriesList = new List<KeywordQuery>();
    // The list of query IDs, converted to an array later
    List<string> queryIdsList = new List<string>();
    string contentSiteURL = Convert.ToString(ConfigurationManager.AppSettings["ContentSiteURL"]);
    // Maps each query ID back to its topic name
    Dictionary<string, string> docTopicQueryID = new Dictionary<string, string>();

    // Framing the queries
    foreach (Term docTopicTerm in docTopicsTermCollection)
    {
        KeywordQuery keywordQuery = new KeywordQuery(clientContext);
        keywordQuery.QueryText = string.Format("(IsDocument:True OR contentclass:STS_ListItem) Tags:#0{0} GVIDoc:[{1}] SPSiteUrl:" + contentSiteURL + " (ContentTypeId:0x010100458DCE3990BC4C658D4AB1D0CA3B9782* OR ContentTypeId:0x0120D520A808* OR ContentType:GVIarticle)", locationTermID, docTopicTerm.Name);
        keywordQuery.IgnoreSafeQueryPropertiesTemplateUrl = true;
        keywordQuery.SelectProperties.Add("ContentType");
        keywordQuery.SelectProperties.Add("ContentTypeId");
        keywordQuery.SelectProperties.Add("GVIDoc");
        keywordQuery.SourceId = Guid.NewGuid();
        keywordQueriesList.Add(keywordQuery);
        queryIdsList.Add(Convert.ToString(keywordQuery.SourceId));
        docTopicQueryID.Add(Convert.ToString(keywordQuery.SourceId), docTopicTerm.Name);
    }

    // Convert the KeywordQuery and query ID lists into arrays
    KeywordQuery[] keywordQueries = keywordQueriesList.ToArray();
    string[] queryIds = queryIdsList.ToArray();

    // Initialize the SearchExecutor
    SearchExecutor searchExecutor = new SearchExecutor(clientContext);
    // The actual use of the ExecuteQueries method
    var results = searchExecutor.ExecuteQueries(queryIds, keywordQueries, false);
    clientContext.ExecuteQuery();

    // Iterate the result set
    List<DocTopic> docTopicsList = new List<DocTopic>();
    if (results.Value.Count > 0)
    {
        foreach (var result in results.Value)
        {
            if (result.Value[0].ResultRows.Count() > 0)
            {
                DocTopic docTopic = new DocTopic();
                docTopic.Title = Convert.ToString(docTopicQueryID[result.Key]);
                docTopic.Url = "[" + docTopic.Title + "]";
                docTopic.TotalCount = result.Value[0].ResultRows.Count();
                docTopic.VideoCount = Convert.ToString(result.Value[0].ResultRows.SelectMany(m => m).Where(k => k.Key.Equals("ContentTypeId")).Select(m => m.Value).Where(y => y.ToString().Contains("0x0120D520A808")).Count());
                docTopic.ArticleCount = Convert.ToString(result.Value[0].ResultRows.SelectMany(m => m).Where(k => k.Key.Equals("ContentType")).Select(m => m.Value).Where(y => y.ToString().Contains("GVIarticle")).Count());
                docTopic.DocumentCount = Convert.ToString(result.Value[0].ResultRows.SelectMany(m => m).Where(k => k.Key.Equals("ContentTypeId")).Select(m => m.Value).Where(y => y.ToString().Contains("0x010100458DCE3990BC4C658D4AB1D0CA3B9782")).Count());

                docTopicsList.Add(docTopic);
            }
        }
    }
    return docTopicsList;
}

Happy Coding,
Sathish Nadarajan.


by Sathish Nadarajan via Everyone's Blog Posts - SharePoint Community

Thursday, February 25, 2016

Expense Management for External Users

Does your organization have situations where people submit expense reports to obtain reimbursement but are not employees within the company itself?  At DynamicPoint, we have seen this situation come up in the case of volunteers, vendors, partners or other supporting staff that are incurring business expenses on a company’s behalf. Read more about how DynamicPoint’s Expense Management application enables external users to access an intuitive SharePoint based expense report that is securely accessible to outside resources.

Plus: see our case study on how a Global Professional Association streamlines expense reporting for volunteers and employees by leveraging the benefits of SharePoint Forms Based Authentication and DynamicPoint’s Expense Management solution!


by DynamicPoint

The post Expense Management for External Users appeared first on SharePoint Blog.


by Mike Marcin, DynamicPoint via SharePoint Blog

Wednesday, February 24, 2016

SharePoint 2016: What Is in a Patch?

We're all familiar with patches and what they entail, but in SharePoint 2016 the model is changing significantly. In this story we will detail the changes and examine what patches will be composed of moving forward.

read more


by via SharePoint Pro

How to Read/Import an Excel Sheet into a SharePoint List Using SPServices and jQuery

Recently I came across an interesting requirement: importing an Excel sheet into a SharePoint list.

Let's check how it works.

For the demonstration I use my favorite web part, the Script Editor, along with the lovely SPServices and jQuery.

1. First, add a custom list and name it ExcelImport.

2. Create five columns in the ExcelImport list:

First Name (a renamed Title column), LastName, Position, Location, Country.

3. Next, add a Script Editor web part to the page.

4. Add references to SPServices and jQuery in the Script Editor.

  (For frequent use, I would suggest adding the references in your master page.)

5. Paste the code below into the Script Editor.

<script src="Your jQuery Reference URL"></script>
<script src="Your SPServices Reference URL"></script>
<script>
var excel;
function GetData(cell, row) {
    if (window.ActiveXObject) {
        try {
            excel = new ActiveXObject("Excel.Application");
        }
        catch (e) {
            alert(e.message);
        }
    }
    else {
        alert("Oops... the inconvenience caused is deeply regretted. Kindly upload from IE."); // ActiveX only works in Internet Explorer
        return;
    }

    // Note the escaped backslash in the path
    var excel_file = excel.Workbooks.Open("D:\\MyExcelSheetName.xlsx"); // read MyExcelSheetName.xlsx from the D drive
    var sht = excel.Worksheets("Sheet1"); // read Sheet1 from the workbook

    for (var i = 5; ; i++) { // start reading data from the 5th row; the loop exits when a row is empty
        var FirstName = sht.Cells(i, 3).Value;  // First Name from the 3rd cell
        var LastName = sht.Cells(i, 5).Value;   // Last Name from the 5th cell
        var Position = sht.Cells(i, 7).Value;   // Position from the 7th cell
        var Location = sht.Cells(i, 9).Value;   // Location from the 9th cell
        var Country = sht.Cells(i, 11).Value;   // Country from the 11th cell

        if (FirstName == undefined || LastName == undefined || Position == undefined || Location == undefined || Country == undefined) {
            break; // stop reading when the Excel data ends
        }
        $().SPServices({
            operation: "UpdateListItems",
            webURL: _spPageContextInfo.webAbsoluteUrl,
            async: false,
            batchCmd: "New",
            listName: "ExcelImport",
            valuepairs: [["Title", FirstName], ["LastName", LastName], ["Position", Position], ["Location", Location], ["Country", Country]],
            completefunc: function (xData, status) {
            }
        });
    }
    alert("Data Imported");
    excel_file.Close();
}

$(document).ready(function () {
    $("#btnload").click(function () {
        GetData(1, 1);
    });
});
</script>
<input type="button" style="float: left;" id="btnload" value="Import Excel"/>

6. Create your Excel file and save it to your D drive as MyExcelSheetName.xlsx.

Make sure your Excel sheet follows the format described above if you are using this code exactly.

7. Click the Import Excel button on your page, and your list will be updated with the desired result.

Note: if you get an error like “Automation server can't create object”, check the ActiveX filtering settings in Internet Explorer.

8. Verify your list.


Cheers ...

Thanks.


by Amir Reza via Everyone's Blog Posts - SharePoint Community

Tuesday, February 23, 2016

Download your template file (*.stp) and expand it.

First, download the site template from the solutions gallery to a file, I usually create a directory called solutions for this procedure:

  1. Navigate to the top-level site of your site collection.
  2. Click Settings, and then click Site Settings.

  3. In the Web Designer Galleries section, click Solutions.

  4. To download the solution, click its name in the solutions gallery, and click Save. Then, in the Save As dialog box, browse to the location where you want to save the solution, click Save, and then click Close.

Now, rename the file from the command prompt to a cabinet (*.cab) file.
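The rename is just a file-extension change, since the .stp is already a cabinet archive internally. On Windows the command is `ren TemplateX2.stp TemplateX2.cab`; the sketch below shows the equivalent step in a POSIX shell (the file name comes from this example and will differ for your template):

```shell
# Work in a throwaway directory standing in for the "solutions" folder.
cd "$(mktemp -d)"
printf 'placeholder' > TemplateX2.stp   # stand-in for the downloaded site template
# Equivalent of the Windows command prompt step:  ren TemplateX2.stp TemplateX2.cab
mv TemplateX2.stp TemplateX2.cab
ls TemplateX2.cab                       # the cabinet file that 7-Zip can now open
```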

Extract the files

I am using 7-Zip to extract the files from the cabinet file. I then place the extracted files in the solutions directory, in a subdirectory named after the file (in this case, TemplateX2).

Note:

All of these file types are actually just cabinet files (*.cab):

  • .dwp = SharePoint dashboard or web part file
  • .stp = SharePoint list or library template
  • .wsp = Windows SharePoint solution file

 


by Tony Di Leonardo via Everyone's Blog Posts - SharePoint Community

Using Content Types to Hide Edit Form Fields in a List

I was recently asked if I could make some list fields read-only because a department was having problems with users filling out fields they weren’t supposed to. They attempted to control this by creating a column called “END OF CUSTOMER INPUT SECTION. INTERNAL USE ONLY BEYOND THIS POINT.” One of the first rules I’ve learned when helping users leverage the power of SharePoint is “What the customer WANTS is not necessarily what the customer NEEDS.” Although the customer was requesting that permissions be used to control who can edit which fields, I knew that wasn’t the solution, not least because it isn’t even possible to set permissions at the column level. So how did I approach this?

First I looked at the list settings. 112 columns. Yikes! This many columns would be a nightmare to fill out. Imagine creating a new list item and being presented with 112 fields. No wonder users ignored warnings and continued to fill out the form. After filling out a mind-numbing 75 fields what’s another 35 fields?!

Next I asked how this list was used and who were the users who interacted with the list (the audience). The process was explained this way: a user requests a new project by filling out a new list item. The list manager is notified via an Alert that a new project was requested. The list manager assigns the project to a Project Liaison, who works with a finance person and a design person who will input additional information. With 4 user groups identified, we then placed them into four functional areas:

  1. Customer

  2. Finance and Approvals

  3. ADT Design Status

  4. Project Status

What Fields do the Users Need to See?

Now that we have identified 4 user groups, the next question is “What fields do each group need to see?” I asked for a list of column names, in the correct order, for each user group. With that in hand I determined that the best approach to meet the customer’s NEED (only display certain columns to a certain audience) was through the use of Content Types.

Creating Content Types

Here is how to create a Content Type:

  1. Site Settings -> Site content types

  2. Click Create

  3. Give it the name of one of your user groups

  1. Select “List Content Type” and “Item” under Parent Content Type (because we are creating a list Content Type).

  2. When you create your first Content Type, create a new group with a period in front of it (.Demand in the example above). This way the new Content Type will sort to the top of the Site Content Types page. (Thanks to Susan Hanley for this tip!)

  3. Click OK

Do this for each of your user groups.

Configuring Our List to Use Content Types

  1. List Settings -> Advanced settings

  1. Select “Yes” under “Allow management of content types?”

  2. Click OK

Adding Content Types

From the List Settings page, click “Add from existing site content types”

Select .Demand from the dropdown to see the Content Types you just created. Add them one by one. Click OK.

Assign List Columns to Content Types

  1. Click on a Content Type

  2. Click “Add from existing site or list columns”

  1. Select columns from: List Columns

  2. Add the appropriate columns then click OK

  3. Repeat for each Content Type

You can click “Column order” if you need to change the order of a column.

Because the Customer user group will be creating a new list item, make “Customer” the default content type by clicking on “Change new button order and default content type” and changing the Position from Top to 1:

Wrapping Things Up

With the Content Types added, this is the new user experience:

  1. A Customer creates a new Item. They are presented with only the fields they need to fill out (37 fields instead of 112)

  2. The Item is saved as a Customer content type. When a member of another user group edits the item it will open up in the Customer content type (or whatever the content type happened to be when the item was previously edited and saved). Users are trained to select the appropriate Content Type from the dropdown list in order to see their pertinent fields:

The “Item” content type is still available for the list manager who needs access to every field.

To improve the user experience we can add instructions to the default display form:

  1. From the List tab click the Form Web Parts dropdown and select Default Display Form

  2. Add a Content Editor web part and add the appropriate instructions. This is what mine looks like:

Now the user experience is much friendlier with only the necessary fields displayed!


by Darrell Houghton via Everyone's Blog Posts - SharePoint Community

SharePoint App using Type Script and Angular JS

SharePoint App using Type Script and Angular JS

What is TypeScript?

  • It adds static typing and structuring (classes, modules, etc.) to JavaScript.
  • Type annotations
  • Static type checking
  • Type definitions
  • Compile-time checking
  • Open source
  • Supported by Angular 2

Strongly typed

TypeScript lets you declare the types of member variables and function parameters. The types are removed during translation to JavaScript, but they enable the compiler to catch type errors at compile time and give the IDE IntelliSense.

JavaScript:

function test(a, b) {
    return a * b;
}
alert(test('one', 'two')); // no error until runtime; the call returns NaN

TypeScript:

function test(a: number, b: number): number {
    return a * b;
}
alert(test(10, 20)); // passing strings here would be a compile-time error

Compiler

The code we write in TypeScript is compiled into a JavaScript file and a map file. The map file maps lines in the generated JavaScript back to the TypeScript source, so the TypeScript can be debugged directly.

Compile: tsc test.ts → test.js

While compiling the TypeScript code into JavaScript, the type annotations are removed:

var a: number = 3;     becomes     var a = 3;

var b: string = 'abc';     becomes     var b = 'abc';

DefinitelyTyped

DefinitelyTyped provides type definitions for many existing JavaScript libraries in the form of interfaces.

  • Describes the types defined in external libraries (*.d.ts files)
  • Not deployed; used only during development
  • Used for compile-time type checking

  Alternatives

  • Dart
  • CoffeeScript
  • ClojureScript
  • Fay

Module & export keywords

Modules work like namespaces in C#: they avoid name collisions and give you the functionality of an IIFE (immediately invoked function expression).

TypeScript with modules

To make the internal parts of a module accessible outside of it, we declare them with the export keyword. Since the model and service are accessed by the Angular controller, we mark them with export.
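As a small sketch (the type and member names here are invented for illustration, not taken from the sample project), a module with and without export looks like this:

```typescript
module App {
    export class Project {                 // export: visible as App.Project outside the module
        constructor(public id: number, public title: string) { }
    }

    class InternalCache { }                // no export: unreachable from outside the module
}

// The exported class can be consumed elsewhere, e.g. by an Angular controller:
var p = new App.Project(1, "Intranet Redesign");
console.log(p.title);
```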

Create a SharePoint-hosted app and add the necessary TypeScript DefinitelyTyped files using the NuGet Package Manager.

Edit the project file and include these lines:

<PropertyGroup>

    <TypeScriptSourceMap>true</TypeScriptSourceMap>

  </PropertyGroup>

<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets" />

Save and reload the project.

Include the DefinitelyTyped by Package manager console

  • Install-Package angularjs.TypeScript.DefinitelyTyped

Project.ts  

ProjectService.ts

$inject

Without an injection annotation the program works fine until the JavaScript gets minified. Minification renames the function parameters, so Angular can no longer tell which services to inject.

$inject is a special AngularJS property that tells the injector which services to inject at runtime. It is declared static and is an array of strings; the order of the strings must match the order of the constructor parameters.
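A minimal sketch of the pattern (the controller and service names below are placeholders, not the actual ProjectCtrl code): the string literals in $inject survive minification even when the parameter names do not:

```typescript
class ProjectCtrl {
    // Static array of strings read by the AngularJS injector; its order must
    // line up with the constructor parameters below.
    static $inject = ["$scope", "ProjectService"];

    constructor(private $scope: any, private projectService: any) { }
}

// Even if a minifier renames $scope/projectService, these strings are untouched:
console.log(ProjectCtrl.$inject.join(", "));
```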

ProjectCtrl.ts 

App.ts 

Build the project and combine the compiled TypeScript output into a single file.

Open the Elements.xml in the app folder, remove all of the .ts file paths, and include the test.js and test.map files instead.

Default.aspx

Download Here


by Krishna via Everyone's Blog Posts - SharePoint Community

How to localize Search Display Templates

A quick and easy way to localize strings in a display template is to simply write out the translated values in the page. This way, you don't have to use separate resource files for the client side (e.g. like http://ift.tt/1jLcHKF).

Step 1: Write out the current resource string to a JavaScript variable in the page

string script = String.Format("var rsMyResource = '{0}';", ResourceService.GetLocalizedString("MyResource"));
Page.ClientScript.RegisterClientScriptBlock(typeof(Page), "rsTS", script, true);

Step 2: Insert the variable in the display template

<span class="srLink"><a href="_#=openUrl=#_">_#=rsMyResource=#_</a></span>

Of course, this is not recommended for a large number of resources.

Get more info like this daily: http://ift.tt/1SAPJaw


by Oliver Pistor via Everyone's Blog Posts - SharePoint Community

SharePoint Host Named Site Collection Creator CodePlex Update

Over the weekend, one of my CodePlex projects that SharePoint administrators seem to love received a major update from my colleague Joseph Passineau. For those of you who don’t know what the SharePoint Host Named Site Collection Creator is, here is a short definition from the site:

The SharePoint Host Named Site Collection (HNSC) Creator is a CodePlex project that allows SharePoint admins to create HNSCs via a GUI instead of PowerShell. This project can be used in two ways. One is a Windows Forms application that needs no installation, and the second is a SharePoint 2013 farm solution that plugs into Central Admin for a native SharePoint experience.

Here are some screenshots:

SharePoint Host Named Site Collection Creator

SharePoint Host Named Site Collection Creator

February 19th, 2016 Update

New release of the SharePoint 2013 Farm Solution

  • Supports choosing a content database.
  • There is a page to manage host header managed paths.
  • We also added 2 scripts to make it easier to install the solution or to update it.
  • Fixed bugs


You can download it on CodePlex at: http://ift.tt/1ggrC8E

The post SharePoint Host Named Site Collection Creator CodePlex Update appeared first on Absolute SharePoint Blog by Vlad Catrinescu.


by Vlad Catrinescu via Absolute SharePoint Blog by Vlad Catrinescu

[Sponsored] Review of Aquaforest Searchlight: Automated OCR Software for SharePoint and Office 365

Review requested by SharePoint-Community.net sponsor Aquaforest, but thoughts are my own

The strong search engine included in SharePoint is one of the reasons that enterprises around the world embrace SharePoint. Instead of relying on navigation to look for an item or document, users now rely on search every day, not only in the enterprise but in everyday personal use as well. When is the last time you navigated categories on eBay, Wikipedia, Craigslist, etc.?

However, many documents, especially PDFs and scanned documents, are not searchable, because even if they are in PDF format they are simply an image inside a PDF. All that valuable information is not searchable, and search-based features, including the new DLP in SharePoint 2016, will not be able to function on those documents. To search inside those documents you need an OCR (Optical Character Recognition) solution, and that's what we will be reviewing today: a product called Aquaforest Searchlight that works with SharePoint 2010 and 2013 as well as Office 365. Its key features include:

  • Audit document stores to determine which documents require processing.
  • Document Stores are monitored to deal with new and updated documents.
  • Dashboard provides a convenient summary of the state of all managed stores.
  • Provides detailed conversion reporting.
  • High Performance Multi-Core Support.
  • Convenient GUI which enables management of all stores via a single interface
  • OCR Support for over 100 languages including Chinese, Korean and Japanese

Review

Aquaforest Searchlight is a client-side application, meaning that you don't install anything on the SharePoint server; all the hard work is done on a client computer. Before digging into the product, let's look at my goal for this review. I uploaded a TIFF file named Dracula that contains an extract of the novel by Bram Stoker.

After leaving it for a few hours, I could find the file by its title in Office 365, but when searching for "Munich", Office 365 returned nothing! Let's fix that with Aquaforest Searchlight.

After installing the application and its prerequisites, we need to add a Library in Aquaforest. A "Library" is not the same as a document library; it can be an entire site collection.

Aquaforest Searchlight

After we click the "Add New Library" button we are guided through a wizard so we can select the exact settings we want for our Library.

On the Library Settings page we have multiple choices

  1. Whether it is a SharePoint on-premises, Office 365, or file share library that we want to add.
  2. Whether to only audit, or to audit and OCR. Audit means that Searchlight will analyze how many documents are not searchable, while Audit and OCR will find those documents and then make them searchable.
  3. The number of cores that we want the application to use. The application uses one core per document, so if we give it 10 cores, it can process 10 documents simultaneously. If you plan to OCR thousands or millions of documents on the first run, it may be a good idea to run it on a powerful virtual server for the initial "transformation" and then move it to a lower-performing machine for day-to-day use.
  4. Since the application modifies the document in order to make it readable, we can choose to turn versioning on if it's off, publish a major version with the new searchable document, and set the check-in comment. The original version is kept as a past version, subject to the versioning rules on the library.

We then go to the Document Settings where we can specify the behavior for each document type and filter which documents get OCR'd.

  1. For PDFs, we can choose whether to process files that are already fully searchable, partially searchable, or not searchable at all.
  2. For the TIFF files, we can select if we process them, and if we delete the original, as the Searchlight application converts them to PDF files.
  3. We have the same settings for BMP, JPEG and PNG Files.
  4. We can select where the Temp Folder location is. The temp folder is where Searchlight will download files while it does the magic to make them searchable.
  5. We can select a date range for the library, so we don't OCR all the old documents that provide no additional value in the Search Engine.
  6. This is a setting I personally loved seeing: we can choose to retain all the original metadata on the document. So if a document gets downloaded and re-uploaded in searchable form, those columns remain the same! With the "Check in Comment" we selected previously, here is what it looks like in the version history. Modified / Modified By did not change even though the OCR took place a few days later!

After the Document Settings are in place, we can select where we put our Archives, if we decide to keep them of course. The archives are all the original documents before Searchlight made them Searchable.

We then go to the OCR Settings, where we have two different engines to choose from: the Aquaforest OCR engine or the IRIS OCR engine.

I have asked Aquaforest what the difference is, and the main difference is that the Extended OCR Engine works with multiple languages and supports more languages than the Aquaforest OCR one. So if you need to translate documents in more languages, make sure to select the extended choice. Both engines have multiple choices such as rotating the image, or deskewing the documents.

After we select our OCR properties, we can move on to create a schedule for the library

We can either run this job manually, or run it every day or hour, to keep our documents always searchable! After this, we select our Email settings if we want to receive emails when a job is done, or fails.

After we finish and we start the job, the Aquaforest Searchlight tool will first audit the document library and report on its Searchability (How much % of the library is indexable) and then start transforming the documents into Searchable ones.

After the job is done and the Office 365 crawler has crawled the site collection, I could successfully search the "Dracula" document and find text from inside it.

Conclusion

In this blog post we had an overview of the Aquaforest Searchlight tool, which allows enterprises to make their PDF and image documents searchable in order to provide additional value in SharePoint. I found the Searchlight application really easy to use, and the 10 or so documents I uploaded were transformed pretty fast, even though I only gave it one core to do all the OCR. One thing to be careful about is making sure that your temp and archive folders have enough space if you need to OCR thousands of documents, as the downloaded files can fill up the C: drive pretty fast.

The thing that I loved most is the fact that the application can turn on versioning, while making sure the important metadata such as "Modified/Created By" and "Created / Modified" do not change after a document is transformed. That would have been a deal breaker for most companies where those four columns are of significant importance.

I didn't really find anything I disliked in the Aquaforest Searchlight tool; it does everything it says it does. I am happy to see it works with SharePoint 2010 / 2013 / SharePoint Online, and it has been tested with the SharePoint 2016 RC, so I am sure it will work there too once the product is released!

With Data Loss Prevention becoming a more important topic for every company, and with the DLP Features in SharePoint 2016 / SharePoint online relying on search, having your documents in fully searchable format is a real plus. If you're looking for an OCR Solution for SharePoint, make sure to check out Aquaforest Searchlight by clicking on the logo below:

Aquaforest Searchlight


by Vlad Catrinescu via Everyone's Blog Posts - SharePoint Community